The Slightly Disgruntled Scientist

...now 7% more viral!


Disgruntled Science Policy Roundup 2016


It’s almost time for the 2016 Australian Federal Election, which also means it’s time for an extra special double-dissolution edition of my science policy word tantrum!

Labor and Liberal

Ah who cares.

Look, I don’t want to do the too-cool-for-school crap of saying the major parties are exactly the same, because they aren’t. Spoiler alert: Labor is probably a better science vote than Liberal. But not so much that I can be bothered reviewing them separately.

The Liberals are obsessed with all things agile and innovation. But they have no idea what that means, or how science fits into it. They seem to think startups take a mere couple of years to reach success. They struggle to articulate what a startup is, and why, say, hairdressers aren’t one. They don’t seem to understand what it’s like to have a perfectly viable business model that fails before it starts because it doesn’t fall neatly into mining or property development. For a party that’s meant to be all about business, they seem remarkably at a loss as to how to help businesses. Except, obviously, rent-seeking, that being the entire basis of their governance.

So they can’t conceive of science beyond making money off technology, and they have no idea how people make money off technology. Meanwhile, funding is cut, investment stops, and the smell of sovereign risk is in the air. Another term or two of the Liberal Party might actually see the end of Australian science altogether.

Labor like to give the knife a gentler twist, offering passionate researchers just enough hope and praise that they’ll keep working for free and paying for their own supplies, just like the union movement fought for.

The cable tie that draws these two together is their complete, total, absolute, abject lack of vision when it comes to the potential of Australians. I have personally seen politicians from both sides be utterly shocked that Australian companies still employ people who, say, solder things, or design solid objects, or ship consumer electronics overseas.

This is obvious in the way they talk about the National Broadband Network, which is almost never discussed as anything but a consumer product. It is apparent in their involvement in the committee work and debate on forcing ISPs to retain users' internet activity. It permeates every aspect of their politics, it informs every policy, it infects every budget.

Under the continued neglect of both parties, agencies have been reduced to empty husks. The CSIRO now hires managers at a six-to-one ratio over scientific support staff and researchers, because all it can do is churn money through meaningless commercialisation initiatives instead of inventing technology to commercialise. NICTA was eviscerated, haemorrhaging staff who simply could not live with the uncertainty; our successive governments have been so reluctant to commit to funding that they would rather leave tens of millions of dollars of valuable equipment rusting in the sea.

The fact is, science is not going to swing votes in marginal seats for either party, and this makes it invisible to both. It is the easiest part of the budget to cut, and the laziest piece of policy work for either.

(I would like to make a special point of the fact that a policy of “science funding at 3% of GDP” is pretty much what a six-year-old could come up with by Googling “what is good science policy” for a class report. It has no currency.)

So yeah, who cares.

The Greens

Last time around, my qualified praise for the Greens focused on their science policy (good), and their anti-science dogwhistling (bad).

These things still exist. The policy is there, and it’s just as good as last time, although not updating it suggests a little complacency on their part. The anti-science sentiment still comes through in places, and I doubt it’s going away any time soon.

But I feel like there has been a permanent, albeit subtle, shift in the attitude shared by the Greens' parliamentarians. More and more I see technological proficiency and scientific literacy informing their participation, and a great example of this was Greens Senator Scott Ludlam’s work on the mandatory data* retention policy committee. (*There is no such thing as metadata, you weasels.)

The Policy Fractal Paradox


Once upon a time, when I was a busy and active member of the Australian Democrats, the thing that made me most excited of all was working on science policy. I ran a couple of public forums (that very few people attended) and spoke at the national conference (that only Democrats attended) and wrote a science policy (that only I read).

I had never written a policy before, and I was immediately struck by two questions. Firstly, where do you start? But more importantly (especially at 3am before a conference talk), where do you stop?

Policy was meant to be the thing that differentiated us from those other parties, big and small; those single issue minor parties, those major parties corrupted by vested interests, the dogwhistling senators and backbenchers, those ideological zealots who refuse to negotiate. We were a party that forged intricately detailed policy in the fires of committees and consultation and casting votes.

So where do you stop? If detailed policy is good, when do you stop writing it?

The Paradox

The paradox starts with the idea that detailed, well-researched policies serve as both a reason to take a party seriously and a reason to dismiss it, depending on the voter. The benefit of each additional policy detail eventually diminishes, and then turns negative, because voters weigh their disagreements more heavily than their agreements. A voter might not even notice their agreement with many of the details, those details reading to them as plain common sense.

There is more to it than this: policy details may stand out as weird exceptions to otherwise good policy, undermining confidence in how serious the party’s commitment to that policy really is. This has always been how I’ve felt about the Greens' opposition to ANSTO in relation to the rest of their science policy: it sticks out as a sign of Not Getting It. And I will come back to that.

So policies become this ever-unfolding fractal of potential reasons to turn away from a party, more than reasons to support it.

The more I reflected on this, the more it bothered me. Partly because I began to see policy-as-a-platform in this relativistic way, no more valid than identity politics or uncompromising ideology (as though “validity” were not already a subjective idea). But also because I recognised in myself an irrational behaviour, a kind of perfectionism, that made me part of the problem I was trying to solve.

I would read policies point by point, scrutinising every line as a single manifesto. And when I found something I disagreed with, it would go on this mental pile of reasons not to support that particular party. But I didn’t really have a corresponding pile for the good points, or for the overall vision. I would just build up this list of excuses to dismiss a party and never check back.

Talking to voters later on made me realise I wasn’t alone in doing this… or in rationalising it. But it also made me realise that maybe, just maybe, I needed to let go of the idea that policy volume was an unconditionally good thing.

Case in Point: The Perfectly Planned City

The Science Party’s charter city policy might be a good illustration of when you should set down your 1950s orange-cased space pen, put away your Futura stencils, and stop writing policy.

The policy is simply that there should be a new city, subsidised by the government and funded by industry, focused on science, research, and the commercialisation of technology. But… there’s more. The policy also details the location (between Sydney and Canberra), the transport mode to connect it (high speed rail), the immigration law exceptions (more immigration), and the zoning regulations (a minimum population density, for example). Even the name is already thought up: Turing.

Here’s the paradox in action: there are exactly zero voters who would vote for the Science Party after reading the charter city policy who wouldn’t already have voted for the Science Party. That is, no one is going to switch votes after reading it. (I feel pretty comfortable making this assertion, so it must be true.) But going the other way: there may well be folk who read this policy, think “WTF,” and look elsewhere.

Even if there aren’t, even if the number of people in the second group is also zero… what’s gained? Have the policy ⇒ possibly lose some voters, or maybe you don’t. Ditch the policy ⇒ no change.

Why do something that has only a cost and no benefit?

Valgrind and GDB: Tame the Wild C


One thing I get asked a lot — almost daily, in fact — is: hey, why are you so amazing

…at identifying bugs related to undefined behaviour in C?

The answer is simple: by using Valgrind and GDB!

This tutorial is all about using Valgrind as part of your development workflow. Valgrind is an amazing tool for debugging, and I’ll start off by showing you what it actually does as a standalone tool. From there, I’ll show you how to use it systematically to find errors via a debugger. Finally, you’ll see how you can actually add it to your code, so that you can catch runtime errors that might otherwise be concealed by your program’s logic.

So if, like me, you spend most of your waking life writing, maintaining and debugging embedded C code, then it’s time to crack open a console, put on your learning hat, and discover a few tricks that will make your life a great deal easier.

What are all these tools and concepts?

What is undefined behaviour?

This post assumes a basic level of knowledge about C, the standards that govern it, and the concept of undefined behaviour... but if you're new to these concepts, here's a quick summary and some references.

Unlike other languages (Java, for example), C programs are not required to keep runtime information about array bounds, or about whether memory accesses are valid. Nor are they required to initialise data to default values (except in a few very specific cases). If a programmer is not diligent about these things, their program can do something that is completely invalid — that is, undefined behaviour.
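To make that concrete, here’s a minimal contrived sketch (my own example, not from any real codebase) that commits both sins at once: it reads an uninitialised variable and runs off the end of an array.

```c
#include <stdio.h>

int main(void)
{
    int values[4] = {1, 2, 3, 4};
    int sum; /* never initialised: reading it is undefined behaviour */

    /* Off-by-one: when i == 4 this reads one element past the end of
     * the array. C performs no bounds check, so the program may crash,
     * or may quietly read whatever happens to live after the array. */
    for (int i = 0; i <= 4; i++)
        sum += values[i];

    printf("sum = %d\n", sum);
    return 0;
}
```

Depending on the compiler, the optimisation level and the phase of the moon, this might print something plausible, print garbage, or crash. That unpredictability is the whole point.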

Undefined behaviour in a running C program means nothing less than: it is no longer possible to reason about your program. It simply isn’t. Cries of but it caaaaan’t be doing that! or shouldn’t x just be the last value? or I didn’t even have monkeys living in the server to begin with! mean nothing in the face of undefined behaviour.

This makes debugging very, very hard.

If you want to know more about undefined behaviour, refer to:

What is Valgrind? What is Memcheck?

Valgrind is not a single tool, but rather a set of tools for checking memory errors, cache usage, heap usage and other runtime behaviours, usually in C programs. This post focuses on Memcheck, a tool for identifying invalid or incorrect use of memory (stack or heap).

I am actually going to use the terms "Valgrind" and "Memcheck" interchangeably, since Memcheck is the default tool Valgrind uses when you run the command valgrind. Just be aware that there are other tools in there too.
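As a taste of what Memcheck picks up, here’s a contrived heap bug (my own example; the file name is just illustrative) along with the commands you’d use to point Memcheck at it:

```c
/* leaky.c: a contrived example of the heap misuse Memcheck reports.
 *
 * Build with debug info so Memcheck can show file and line numbers:
 *     gcc -g -O0 leaky.c -o leaky
 * Then run it under Valgrind's default tool, Memcheck:
 *     valgrind --leak-check=full ./leaky
 */
#include <stdlib.h>
#include <string.h>

int main(void)
{
    char *buf = malloc(8);

    /* Copies 9 bytes (8 characters plus the terminating NUL) into an
     * 8-byte allocation; Memcheck flags the final byte as an invalid
     * write and shows where the block was allocated. */
    strcpy(buf, "12345678");

    /* buf is never freed; with --leak-check=full, Memcheck reports the
     * allocation as "definitely lost". */
    return 0;
}
```

Both problems are reported with stack traces pointing at the offending lines, which is exactly the kind of head start you want.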

The bugs that I’ve found using Valgrind are the worst of the worst, straight out of the C hall of shame. We’re talking about bugs that:

  • only appear on one person’s machine
  • seem to happen randomly, even in the same environment
  • don’t cause crashes, just give you the wrong output
  • crash, but the stack trace looks totally wrong (How did it crash there? I changed code somewhere else entirely!)
  • only occur at certain optimisation levels
  • only occur with newer compiler versions

Valgrind works by running your executable on a synthetic processor, with whichever tool you’ve selected inserting its own instrumentation code as it runs. You don’t need to recompile for Valgrind, or link with special libraries, or even run debugging builds (although it’s almost always the case that you should). Valgrind runs are significantly slower than normal runs, though: about 50 times slower.

But since it cuts your debugging time down by a factor of about a thousand, it’s probably worth it.
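And as a preview of the “add it to your code” part promised above: Valgrind ships a header of client-request macros that let your program talk to Memcheck directly. The macros are the real valgrind/memcheck.h API; the toy program around them is just my illustration.

```c
/* Client requests: compile as normal; the macros become cheap no-ops
 * when the program isn't running under Valgrind. */
#include <stdio.h>
#include <stdlib.h>
#include <valgrind/memcheck.h>

int main(void)
{
    int *data = malloc(4 * sizeof *data);

    data[0] = 42; /* only element 0 is ever written */

    /* Ask Memcheck to check the whole buffer right now, rather than
     * waiting for some distant branch to depend on the uninitialised
     * bytes. Under Valgrind, this reports the three missing elements. */
    VALGRIND_CHECK_MEM_IS_DEFINED(data, 4 * sizeof *data);

    /* RUNNING_ON_VALGRIND is non-zero under Valgrind, so checks like
     * this cost you nothing in production runs. */
    if (RUNNING_ON_VALGRIND)
        fprintf(stderr, "paranoia mode engaged\n");

    free(data);
    return 0;
}
```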

The reality of basic science: technology is not alive


This is a partial rebuttal of Matt Ridley’s The Myth of Basic Science, which makes the argument that technological progress is not driven by publicly funded scientific research (and presumably that we therefore don’t need it). I would like to focus on the claim that technology is akin to a living thing, and that because it is alive, it will inevitably progress whether basic science is funded or not.

Because that is bizarre.

For example, Ridley claims that:

technology is developing the kind of autonomy that hitherto characterized biological entities

No, it’s not.

Technology will find its inventors, rather than vice versa.

What does this even mean? What is the process by which this occurs? This really is starting to seem like personification taken way too literally.

By 2010, the Internet had roughly as many hyperlinks as the brain has synapses.

Rocks have many more atoms. Mycoplasma genitalium has far fewer genes. So what?

a significant proportion of the whispering in the cybersphere originates in programs […] rather than in people

None of that is occult, or beyond explanation, or even unexpected. Feeling mystical about programs you don’t understand doesn’t mean they’re anything like a living thing.

(Also, “cyber” — drink!)

Please, oh singularity, save us all from science writers and economists harping on about the “evolving living organism that is technium.”

Technology, even considered as a discrete entity, however you’d define it, is not alive. No, I don’t have a definition of “life.” You don’t either. But whatever it might be, it won’t include (a) rocks, (b) things made of rocks, (c) really intricate things made of rocks, or (d) abstract concepts.

Yes, I sometimes personify technology. No, that doesn’t mean I secretly think it’s alive.

Emergence

The concept Ridley is groping towards is that of emergence. Emergence happens when a system with simple rules and massive numbers of participants shows complex behaviour at a higher level. The behaviour of the system may be unpredictable and yet show little pockets of order (in short periods of time, or over short distances). Sometimes these pockets are ordered enough that we can model them with a new set of laws that have little to do with the microscopic ones… but we must always remember that we are still dealing with order emerging from chaos.

Board games, spots on a leopard, mathematics itself, Conway’s game of life, and the weather are all examples of emergence. So is the entire universe, since it’s made up of simple particles obeying simple rules, and yet shows every class of complex behaviour we know about, a lot of which we can simplify when we need to.
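To see how little machinery it takes, here’s a toy C implementation of Conway’s game of life (my own sketch, nothing authoritative): two rules, a wrap-around grid, and a glider that nobody explicitly programmed to crawl.

```c
#include <stdio.h>
#include <string.h>

#define W 20
#define H 12

/* Count the live neighbours of cell (y, x) on a wrap-around grid. */
static int neighbours(char g[H][W], int y, int x)
{
    int n = 0;
    for (int dy = -1; dy <= 1; dy++)
        for (int dx = -1; dx <= 1; dx++)
            if ((dy || dx) && g[(y + dy + H) % H][(x + dx + W) % W])
                n++;
    return n;
}

int main(void)
{
    char grid[H][W] = {{0}}, next[H][W];

    /* Seed with a glider: five live cells that, under the rules below,
     * travel diagonally across the grid forever. */
    grid[1][2] = grid[2][3] = grid[3][1] = grid[3][2] = grid[3][3] = 1;

    for (int step = 0; step < 20; step++) {
        for (int y = 0; y < H; y++)
            for (int x = 0; x < W; x++) {
                int n = neighbours(grid, y, x);
                /* The entire "physics": a live cell survives with 2 or
                 * 3 neighbours; a dead cell is born with exactly 3. */
                next[y][x] = grid[y][x] ? (n == 2 || n == 3) : (n == 3);
            }
        memcpy(grid, next, sizeof grid);
    }

    for (int y = 0; y < H; y++) {
        for (int x = 0; x < W; x++)
            putchar(grid[y][x] ? '#' : '.');
        putchar('\n');
    }
    return 0;
}
```

Gliders, oscillators, and self-sustaining structures all fall out of that one conditional: none of it designed, all of it emergent.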

Life itself is an example of emergence, but here’s the important point: not all examples of emergence are alive.

Rubbish Review Debut: The Noontec N5 NAS


I recently became the proud owner of a Noontec N5 network attached storage (NAS) enclosure. I bought it because I needed:

  1. Network access to the contents of a large hard drive.
  2. USB access to the contents of a large hard drive.

It’s hard to tell where to start with this amazing device, so let’s go with the all-important first impression. Nothing says factory quality control quite like a few dead cockroaches stuck to a random sticky pad inside the enclosure. From that point on, I knew I was in for a treat.

The cockroaches could not be removed. It’d probably void the warranty anyway.

Network setup

It assembled fine, so I powered it up and connected it to my network. It then insisted on hijacking my router’s IP address, acting as a DNS server, and generally screwing up my entire network. Seems reasonable. In order to access it I had to remove it from my network, connect a Linux box directly via ethernet, use ifconfig/route/etc to manually set up network access to it, and then configure it to not be monumentally stupid.
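For the curious, the rescue looked roughly like this (all addresses illustrative, assuming the N5 had squatted on the router’s address at 192.168.1.1):

```
# Bring up the directly-connected interface on the subnet the N5
# had claimed for itself.
sudo ifconfig eth0 192.168.1.2 netmask 255.255.255.0 up

# Check the box answers, then open its web UI at http://192.168.1.1/
# and turn off the DHCP/DNS shenanigans.
ping -c 3 192.168.1.1
```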

Easy as.

Then it was time to set up SMB. It seemed easy enough at first: my Mac could connect, my Windows 8 machine could connect, my Linux machine… not so much. I progressed through smbclient, mount.cifs, and eventually even Wireshark to figure out what the problem was. You might think, “well, Linux has never been great at SMB, of course you need to do some work there.” But hold your judgement until you hear the problem: to authenticate SMB connections, the N5 uses NTLMv1. NTLMv1 has a number of terrific vulnerabilities that could be exploited by a 13-year-old with a graphics calculator, so NTLMv2 was created in 1998 to address some of these issues. The N5 does not support NTLMv2. That is, the N5’s level of network security predates Internet Explorer v3.

No matter. I’ll just explicitly downgrade my security settings. Cool.
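In practice, “downgrade my security settings” means explicitly asking for the ancient dialect when mounting; something along these lines on Linux (share name, address and username are illustrative):

```
# sec=ntlm forces the NTLMv1-era authentication the N5 understands.
# (mkdir -p /mnt/n5 first.)
sudo mount -t cifs //192.168.1.5/share /mnt/n5 -o username=admin,sec=ntlm
```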

Side note: the N5’s web interface exposes all passwords in plain text. Super useful feature that.

During this process, by the way, I contacted Noontec for help. They have a website, of course — the support email address listed there belongs to another company, and the firmware downloads are hosted on Dropbox. Seems legit. When I contacted them via this address, they suggested I start by updating the firmware, and sent me a link to do so. The firmware completely changed the branding of the box (as reported by the web UI and network protocol responses). I initially worried about the potential for malware, but realised that even running a botnet off this NAS could only improve the functionality of the N5.

So now I can check off item one on my list, and all it took was manual network routing and byte-for-byte packet inspection. On to item two: USB access!