Apparently, Starbucks in China doesn’t want you to save a copy of the wi-fi Terms of Service you agree to — it’s available only as a PNG image file, embedded in a vertically scrollable cell in an HTML table layout:

You can’t copy the text to your clipboard, or save it somewhere (say, for comparing with later versions to see if their Terms of Service changed). And what do you do if you have poor or no vision and rely on a screenreader?

Is it even legally binding (in China) to agree to an online contract that isn’t represented as text? Or is that a distinction programmers would make but lawyers never would?

See the top-level portal HTML page for the full glory. This is from the Starbucks on the outdoor second level of the Hai An Cheng Mall in Shenzhen, on 1 June 2016.

Unrelatedly, in the page source, note the “var temp = url.length;” and the subsequent failure to actually use the temp variable in the loop control or anywhere else.

I’m not sure which bothers me more, the unparseable ToS text or the sloppy coding. Okay, that’s not true — I am sure: the unparseable ToS text. This is supposed to be a contract, but only one side actually has the text. Come on, Starbucks. If the issue is worries about the Chinese characters displaying correctly in all browsers, then present the PNG image for display but still provide the text as an underlay, so that it can be saved as text.

Here’s the full ToS image:

Starbucks Shenzhen (Hai An Cheng) Wi-Fi Terms of Service, as of 2016-06-01

My article Dissecting The Myth That Open Source Software Is Not Commercial is now up at the IEEE Software Blog. (Comments over there, please, not here.)

It’s gotten a surprising amount of Twitter activity, which is pleasing. The article’s message is one I’d like to see spread widely!

Many thanks to editor Stefano Zacchiroli for editing, and for suggesting an article in the first place.

If you encountered this error when trying to clone the Redis repository from GitHub recently, there is a solution. The error looks like this:

  $ git clone
  Cloning into 'redis'...
  remote: Counting objects: 42713, done.        
  remote: Compressing objects: 100% (33/33), done.        
  remote: Total 42713 (delta 15), reused 0 (delta 0), pack-reused 42680        
  Receiving objects: 100% (42713/42713), 19.29 MiB | 6.81 MiB/s, done.
  error: object 1f9ef1b6556b375d56767fd78bf06c7d90e9abea: \
  zeroPaddedFilemode: contains zero-padded file modes
  fatal: Error in object
  fatal: index-pack failed

The problem is that your ~/.gitconfig file probably has this setting:

          [transfer]
                  fsckObjects = true

…and/or perhaps these settings:

          [fetch]
                  fsckObjects = true
          [receive]
                  fsckObjects = true

Solution: set the value(s) to false while you clone Redis, then set them back to true afterwards.
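Concretely, assuming it is the transfer.fsckObjects key that is set (the fetch and receive variants work the same way), the dance looks like this — note that the repository URL in the comment is the one Redis used at the time this was written, so adjust it if the project has moved:

```shell
# Temporarily relax git's strict object checking so the clone succeeds.
git config --global transfer.fsckObjects false

# Clone as usual, e.g.:
#   git clone https://github.com/antirez/redis.git

# Then restore strict checking afterwards.
git config --global transfer.fsckObjects true
```

(`git config --global` edits ~/.gitconfig in place, so hand-editing the file works just as well.)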

See also this discussion for more; that’s where I originally stumbled across the solution. I’ve now cross-linked between this post and a ticket in the Redis issue tracker.

Pro non-tip: you might think that running

  $ git config --global fsck.zeroPaddedFilemode ignore

so as to get

  	[fsck]
  	zeroPaddedFilemode = ignore

in your .gitconfig would solve this problem in a nice targeted way, but it won’t, so don’t bother. See here for some discussion about that.

(This post is part of my SOLVED as a Service series, in which I post solutions to technical problems with open source software that I use. The point is that the next time I encounter the same problem and do an Internet search, my own post will come up; this has now actually happened several times. If these posts help others, that’s a bonus.)

Wow. I had no idea this could happen!

(Rest of this post is by Michael Albaugh, except for the parts that quote me.)

From: Michael Albaugh
Subject: Re: Wait, what?  Can speakers pick up radio by themselves?
To: Karl Fogel
Cc: The Usual Suspects
Date: Fri, 11 Dec 2015 10:03:22 -0800

Disclaimer: It has been quite a while since I had to deal with this stuff for pay, and my amateur license expired so long ago they recycled my call.

On Dec 11, 2015, at 9:13 AM, Karl Fogel wrote:

This is happening, this is literally happening right now:

I have plugged my phone headset (which doubles as my desk headphones) into my computer speakers. This is a standard pair of small standalone computer speakers, one of which plugs into the computer’s sound port with a standard 2.5mm connector, with the other speaker connected to the first. The first speaker also has a headset jack and a volume control on the front.

It presumably also has a power supply. That is, these are amplified speakers.

With my headset plugged into that speaker’s jack, and the speaker volume turned all the way down, I can hear a radio station playing in the headset, faintly and with some staticky fuzz, but clearly. I don’t know which station it is, but sometimes the pop music stops and an announcer comes on (I can’t quite hear what he is saying, though I might be able to catch it next time he comes on).

This is not surprising. What you have is some consumer-grade cables (i.e. not particularly designed to reduce the reception of stray signals at all cost, or any cost) plugged into a device with some non-linear components (inherently, such as transistors and diodes, or unintentionally, such as inductors with other than air cores) and including a means to amplify the result. That is, you have a crystal radio hiding in your amplifier.

See also “Why do I get the local radio station on my fillings?”

However, if I turn the volume knob on the speaker up at all, then the station fades out and I get silence.

Or, you have shifted the sum of the intended input and the signal that is being “detected” out of the range of the non-linearity.

If I unplug the speakers from the computer, then I don’t hear the station anymore.

Here I am leaning more on speculation, but perhaps the speakers are sensing the (lack of) DC bias on their input and shutting down the output.

So my… computer is acting like a radio?

Actually, I suspect that your speakers are. You should immediately rush out and buy various models of Bose, Harman Kardon, and Beats by Apple speakers and repeat the experiment. 🙂

Why? And why is it only audible when the speaker’s volume is turned down?

See above.

In related news, perhaps you missed the hack that was in the news a short while back. If you have your Siri, Google, or Cortana “assistant” enabled to work without pressing anything, and you have a wired handsfree headset plugged into your phone, then someone can inject audio into your phone and say “Siri, post all my photos to Instagram” or “Siri, find goat porn”.


In older news, back when phones were always wired, heavy enough to be a murder weapon, radio stations that didn’t want their “personalities” to have to drive out to a shack in the marshes would lease lines from the phone company, running from their handsomely appointed studios to that shack. These lines would run through one or more phone company facilities. In one such facility (cough — [[redacted]] — cough) some of the workers had connected a speaker across the line as it went through, so they could have music in their workplace. One day, a worker experienced one of those WTF moments, and verbalized the feeling. Of course, every speaker is a microphone, and the exclamation was sent out over the air, causing a fair bit of consternation, agitated phone calls, and denials from the on-air host. Not to mention a mad scramble to disconnect that speaker and look innocent.

Welcome to the future, here’s your whoopee cushion.


Update 2015-12-03: I just found out from a response tweet from @jacobian that the user flogging is apparently a requirement of the PCI standards, and thus many online services are essentially forced into it. Would love feedback or further information from anyone familiar with how PCI standards get baked.

Calling all designers of online systems that do user authentication… Wait, that could be shorter:

Calling all designers of online systems:

Please stop locking out users after three failed login attempts.

That security measure is left over from the days of Unix consoles that were just dumb terminals connected to a server somewhere else in the building. It makes less and less sense in the modern era. These days, large distributed botnets are engaged in constant automated login attempts against all publicly reachable online services of any size, using guessed username/password combinations, on the principle that only a tiny fraction of the attempts need to succeed for the effort to be worthwhile. The result is that users with strong passwords but human-readable usernames are penalized for being the target of failed hacking attempts.

It happened to me recently:

From: Karl Fogel
To: Mailing List Of Various Techie Friends
Subject: Speaking of passwords

I just found out from a rep that the reason Wells Fargo Bank kept
resetting my (incredibly secure) online access password, thus
forcing me to do a password reset dance about twice a month, is that
online accounts get automatically locked after three failed login
attempts.  Since my username was "karlfogel" -- it's changed to
something less guessable now -- some jerk with a botnet was causing
Wells Fargo to lock me out on a regular basis, presumably by trying
a username generated from my real name and passwords that were
various combinations of my birthday, relative's names, etc.  The
same is probably happening to thousands of other customers.  After
all, the hackers only need a tiny number of successes.

I wonder if Wells Fargo has really thought carefully about the
usefulness of a 3-failures lockout policy in the modern era of
distributed attacks against your entire user base.  This was not a
topic I felt it profitable to take up with the phone rep, though.
*cough cough*.

Every time you force your users to do a password reset dance, which usually involves some kind of email confirmation step, you are decreasing their security. First, because if a user is forced to change her password frequently, she is likely to start making passwords that are easier to remember, because why invest in memorizing a hard password if one is just going to have to reset it soon anyway? Second, and more importantly, because you are giving hackers the power to lock someone out of their own online account, which creates two vulnerabilities: one, now the hacker has an additional attack surface (the user’s email account), and two, your user support staff also becomes an attack surface because the hacker can now call up and impersonate the legitimate user, saying “Help, I’m locked out of my account” — a fact that the support rep can easily confirm, and which will lend credibility to the hacker’s attempt at social engineering.

Just as a general principle, it’s usually not good to allow attackers to change the behavior of the system for legitimate users. When you allow that, you give the hackers more material to work with, and they will always be more imaginative than your programmers or your support reps, because once they sense that they have a good target, they can spend all day thinking about how to approach it.

It’s fine to have a delay between login attempts. Maybe it’s even okay to increase the delay somewhat when there are a suspicious number of failed login attempts for a given user (although I’m not sure about that, and it is a minor violation of the general principle above).
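For illustration, here is a minimal sketch of what a per-user increasing delay could look like — the login_delay helper is hypothetical, not from any real system — doubling with each consecutive failure up to a cap, rather than ever locking the account:

```shell
# Hypothetical helper: map a count of recent consecutive login failures
# for a user to a delay in seconds, doubling each time but capped at 64s.
login_delay() {
    failures=$1
    echo $(( 1 << (failures < 6 ? failures : 6) ))
}

login_delay 0    # prints 1
login_delay 3    # prints 8
login_delay 100  # prints 64 (capped)
```

The cap keeps the worst case bounded for the legitimate user, while still making large-scale guessing expensive — and a botnet can never deny the real user access entirely.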

If you want to help users who have weak passwords, have your security team run password-guessing attempts itself (without the rate-limiting), or even run cracking attempts against the password database itself, and then follow up with the users whose passwords fail the test. You can just let them know the next time they log in, or if you want to provide especially deluxe service, follow up via an automated phone call or something. Don’t let them know by email, though: it’s not a great idea to send cleartext email across the Internet telling someone their password is insecure.

But don’t treat failed login attempts as special events that need some kind of reaction. They are more like spam: inevitable, ubiquitous, and best handled in ways that have no effect on the target. It’s not your users’ fault that people are trying to hack into their accounts, so don’t punish them for it.

Agree? Feel free to retweet #EndLoginFailureLockout, or redent.

Addendum: One of my friends on that mailing list followed up with this story:

Anthem Blue Cross, in order to let you make online payments, redirects
you to a random payment processor with a scammy-sounding domain name,
which tells you the following:

1. You need to make a new account with us because we're not tied into
Anthem's database. Because, you know, I can personally take credit
cards - but that's apparently beyond the capability of the largest
health insurance provider in the country.

2. By the way, you might already have an account with us from some
other place, so you'll have to log in with that account instead. No,
we can't tell you whether that's the case or not.

3. You must choose a password between 5 and 8 characters.

I'm not kidding. 5-8 characters.

I make my payments over the phone.

Software Freedom Conservancy logo

Update 2015-12-01: How could I have forgotten to mention that there’s a donor match going on right now? If you become one of the next 50 new Conservancy supporters, a donor is matching up to $6000! Please help Conservancy get every dollar they can from this generous donor.

Few organizations are as effective per dollar as the Software Freedom Conservancy.

The list of what they’ve done in 2015 alone is impressive — and that’s before you consider how small a staff they do it with.

You’ll notice that link was actually to their 2015 fundraiser page, which explains why they need to raise money now, and exactly what they plan to do with it. (Did I mention high marks for transparency?)

Today, for Giving Tuesday, I became a Conservancy Supporter again, and hope you’ll consider doing the same. The free software movement doesn’t run on good will. It runs on dedicated people giving their all, and those who do it full-time need support from everyone who understands why this movement is important.

If you’re looking to retweet, try this one, or redent here.

I’ll keep this short, because the very best thing you can do right now is go watch this 18-minute video of Nina Paley giving a talk at TEDxMaastricht about exactly why she is a copyright abolitionist and how copyright abolition starts at home, especially for artists. It is by far the best, most eloquent explanation I’ve seen yet of the harm copyright causes to artists and audiences and how liberation is possible:

If you’re one of the “copy-curious” — people who feel something is wrong with the current copyright system, but who worry about abandoning it wholesale because “how will artists make a living” and other similar questions the intellectual monopoly industry wants circling around in your head — then this talk is for you.

It’s less than 20 minutes. You will be mesmerized. And, like Nina’s audience at the talk, you will come out of it truly understanding the copyright abolition position and why an artist of Nina Paley’s caliber holds it.

Watch it.

Link to it.

Retweet it or redent it.

Please share widely!

I got a treeware letter recently from Experian explaining how one of their servers had been hacked and how my private data (name, address, Social Security number, phone number, birth date, etc) was likely obtained by criminal resellers. The letter was a little more euphemistic than that, but that’s basically what Experian was admitting. To make up for this incident, they were offering me a free two-year membership in their “ProtectMyID elite credit monitoring and identity theft resolution services”.

Now, one might, in these circumstances, ask oneself “Why would I want to enroll in an identity protection service offered by the very company that just admitted they compromised my identity when their server got hacked?”

Fortunately, their own FAQ addresses this question forthrightly:

Q: Since Experian was compromised; can it effectively offer credit monitoring?

A: Absolutely. This was an isolated incident of one server and one client’s data. The consumer credit bureau was not accessed in this incident and no other clients’ data was involved.

Well, that makes the decision easy. I don’t blame them for getting hacked — that could happen to anyone. But no way am I trusting my private data to people who use a semicolon where they should use a comma!

On a private mailing list, a friend recently asked this:

Playing devil’s advocate here: what privacy are you trying to protect? Is it very important to you that websites not know what sort of products you’re interested in (and if so, why)? Or is it that you simply find targeted ads annoying?

I ask as someone who spent four years trying to help websites show less annoying ads.

Below is my response (after someone else on the list said “Sorely tempted to exfiltrate the hell out of this. Can we have it on a web page please?”):

I think Eben Moglen’s observation that privacy is really an ecological concept, not a transactional one, is the best answer to this. Thinking of privacy primarily in terms of the relationship between the user and various commercial third-parties misses the point. This post gives the relevant passage from Eben (it’s not long, and there’s a link to his full talk):

He has also pointed out that these days it’s an explicit goal of the U.S. government to have and maintain the social graph of everyone. That is, all the relationships, to the highest degree of accuracy and resolution possible. So the information Google and other online services collect is now potential data for that graph. It’s already both subpoenaed at some times and surreptitiously exfiltrated at others (though Google has done admirable work trying to prevent the second; how successful that has been, we can’t know, but it probably has had some limiting effect).

My point is: all that data we’re collecting, once it exists, it’s valuable to more parties than the ones who originally collected it. And by the Ashley Madison Principle, there’s no such thing as a confidential dataset. There are only datasets that have not yet been involuntarily shared, and those which have been. There is no guarantee you will be able to tell which category your particular dataset falls into.

So when you ask “Is it very important to you that websites not know what sort of products you’re interested in?”, you’re framing an ecological question in a transactional way. This unintentionally transforms the question from the one we should care about to the one collectors of large-scale data would prefer we ask :-).

I realize, of course, that there is a tradeoff here. Google really can improve the quality of ads — quality as seen not just from the advertiser’s point of view, but even from the user’s point of view — by tracking and analyzing everything everyone does. The benefits are near-term and (for Google and the advertisers) centralized; the costs are long-term and decentralized. But that doesn’t mean the costs aren’t significant. It’s very similar to the economics of a lot of environmental pollution, actually, which is partly why “ecological” is such a good word here. I think in some ways it’s almost the definition of an ecosystem to say it is a system from which short-term, easily measurable benefits can be extracted for particular members at long-term, hard-to-measure (but real) costs for all members. Privacy turns out to be such a system.

Does that help?