The Geek’s Reading List – Week of November 13th 2015



I have been part of the technology industry for a third of a century now. For 13 years I was an electronics designer and software developer: I designed early generation PCs, mobile phones (including cell phones) and a number of embedded systems which are still in use today. I then became a sell-side research analyst for the next 20 years, where I was ranked the #1 tech analyst in Canada for six consecutive years, named one of the best in the world, and won a number of awards for stock-picking and estimating.

I started writing the Geek’s Reading List about 12 years ago. In addition to the company specific research notes I was publishing almost every day, it was a weekly list of articles I found interesting – usually provocative, new, and counter-consensus. The sorts of things I wasn’t seeing being written anywhere else.

They were not intended, at the time, to be taken as investment advice, nor should they today. That being said, investors need to understand crucial trends and developments in the industries in which they invest. Therefore, I believe these comments may actually help investors with a longer time horizon. Not to mention they might come in handy for consumers, CEOs, IT managers … or just about anybody, come to think of it. Technology isn’t just a niche area of interest to geeks these days: it impacts almost every part of our economy. I guess, in a way, we are all geeks now. Or at least need to act like it some of the time!

Please feel free to pass this newsletter on. Of course, if you find any articles you think should be included please send them on to me. Or feel free to email me to discuss any of these topics in more depth: the sentence or two I write before each topic is usually only a fraction of my highly opinionated views on the subject!

This edition of the Geeks List, and all back issues, can be found at

Brian Piccioni


1)          Belgium Tells Facebook to Stop Storing Personal Data From Non-Users

Facebook faces a challenge: its user base is probably approaching saturation in the developed world, where it gets most of its revenue. Based on this and other articles, it appears the solution is to appropriate personal data from non-users. The legal position here is pretty straightforward: if you haven’t joined Facebook, the company can’t hide behind an all-encompassing EULA. Belgium is a small country and the fine itself is small relative to Facebook’s profits; however, the legal theory almost certainly applies to all developed countries, and probably many others as well. Time will tell whether regulators – or even class action lawyers – will act on this egregious violation of privacy.

“Facebook Inc. lost a fight with Belgium’s privacy watchdog after a court ordered it to stop storing personal data from people who don’t have an account with the social network. Facebook faces a fine of 250,000 euros ($269,000) a day if it doesn’t comply with the ruling, the court said in an e-mailed statement Monday. Belgium’s privacy watchdog had sued Menlo Park, California-based Facebook for failing to respond to its demands to bring its privacy policy in line with local laws. Facebook’s “disrespectful” treatment of users’ personal data, without their knowledge, “needs tackling,” Willem Debeuckelaere, president of the Belgian commission, said in May.”

2)          Microsoft unveils German data plan to tackle US internet spying

The “Patriot Act” explicitly required US companies to provide US law enforcement with all data they held, regardless of where they held it. The Snowden revelations showed that large US tech companies decided to go much further, offering enthusiastic support for extra-legal data collection, back doors, and so on. None of this is new and it almost certainly predates the 9/11 terror attacks used to justify it. Sadly, many companies are feeling the pinch and US firms are treated with the deep suspicion they deserve. Whether the data is located in Germany or Timbuktu, you can rest assured the NSA (and, by extension, Chinese and Russian intelligence through their spies within the NSA) has access to it, either through the collaboration of domestic (i.e. German or Malian) intelligence or through other back doors.

“Microsoft threw down a challenge to the US tech industry on Wednesday as it came up with a radical new regime to try to protect the data of some of its biggest European customers from US government over-reach. The arrangement, which will ringfence European data with a new legal set-up designed put it beyond US courts and the country’s national security establishment, in one of the most drastic corporate responses yet to the American internet spying scandal. Technology analysts called it a “watershed moment”, describing the manoeuvre as the first time a major US tech group had accepted its inability to protect customer data from the US government. The plan exposed the flaws in other recent attempts by US tech companies to ease European fears by simply opening more data centres in Europe, because these were still exposed to US intrusion, they added.”

3)          Google Just Open Sourced TensorFlow, Its Artificial Intelligence Engine

Google is participating in the gold rush for cloud services. Chances are there will be a fairly small number of companies which will dominate the space and nobody will be in a position to dislodge those leaders due to their scale. In that light, open sourcing TensorFlow is not generosity since you need access to the underlying hardware and software platform to really make it work. No doubt any application developed with the technology will only work, or work well, on Google’s cloud platform.

“The Google Photos search engine isn’t perfect. But its accuracy is enormously impressive—so impressive that O’Reilly couldn’t understand why Google didn’t sell access to its AI engine via the Internet, cloud-computing style, letting others drive their apps with the same machine learning. That could be Google’s real money-maker, he said. After all, Google also uses this AI engine to recognize spoken words, translate from one language to another, improve Internet search results, and more. The rest of the world could turn this tech towards so many other tasks, from ad targeting to computer security. Well, this morning, Google took O’Reilly’s idea further than even he expected. It’s not selling access to its deep learning engine. It’s open sourcing that engine, freely sharing the underlying code with the world at large. This software is called TensorFlow, and in literally giving the technology away, Google believes it can accelerate the evolution of AI.”

4)          Autonomous cars aren’t nearly as clever as you think, says Toyota exec

This is a good sanity check on all the articles you read about how self-driving cars (or robots, for that matter) are going to revolutionize the world as we know it, probably sometime in the next few years. Automotive technology involves big stakes and it takes a long time for any new system to become mainstream. As the article mentions, self-driving tests are done in carefully contrived circumstances for a reason: the systems are very limited and can only work within defined boundaries. Reality (bad maps, lousy GPS reception, extreme weather, and so on) tends to be a lot messier than the streets of Mountain View.

“Pratt explained what researchers already know but perhaps others don’t: Autonomous cars look great in controlled environments but soon fail when faced with tasks that human drivers find simple. Drivers, for example, can pretty much get behind the wheel of a car and drive it wherever it may be, he said. Autonomous vehicles use GPS and laser imaging sensors to figure out where they are by matching data against a complex map that goes beyond simple roads and includes details down to lane markings. The cars rely on all that data to drive, so they quickly hit problems in areas that haven’t been mapped in advance.”

5)          Mountain View: Google self-driving car pulled over for ‘driving too slowly’

It had to happen: a cop decided to pull over a self-driving car, though it is not clear there was an offense. If there was a violation, it raises the question of who is responsible: the manufacturer of the car or the passenger? Some traffic citations are pretty subjective, as is obstructing the flow of traffic, so it is not always going to be cut and dried where the fault lies. One thing about self-driving cars is that their systems are bound to keep an accurate record of every part of their journey, so evidence should not be hard to come by.

“When one of Google’s self-driving vehicles is pulled over, who gets the ticket? The passenger or the car? The question was asked across the Internet on Thursday, after a police officer stopped one of the gumball-machine-shaped vehicles on El Camino Real. In a blog post, the Mountain View Police Department said the officer noticed traffic backing up behind a slow-moving car in the eastbound No. 3 lane, near Rengstorff Avenue. The vehicle was traveling at 24 mph in a 35 mph zone.”

6)          Cars Talk to Cars on the Autobahn

One automotive technology which would be pretty quick to implement is Vehicle to Vehicle (V2V) and Vehicle to Infrastructure (V2I) communication. The idea here is that the car up ahead – even if out of sight – may be slowing down or taking evasive action. Rather than relying on vision – even machine vision – to deal with an obstacle, pothole, etc., the cars behind could be told what is going on and alert their drivers through a shaking steering wheel or a warning tone. V2I is similar, except that speed limits, construction zones, or even icy conditions might be signaled to the car or even enforced through its control systems.

“When my car can talk to your car, and yours can talk to the next, they’ll all be able to substitute shared data for the one thing robots lack: intuition. Such talkativeness will allow cars to space themselves out, make room for merging vehicles, and vary their speed without setting off bothersome ripple effects in the traffic behind. Best of all, it will let robocars drive (or appear to drive) a bit more boldly: A well-informed car can pass a semi with such confidence that a human observer might almost mistake it for foolhardiness.”
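As a toy illustration of the idea – this is not any real V2V protocol such as DSRC, and the message fields and thresholds below are invented for the sketch – a broadcast from the car ahead plus a simple rule in the following car might look like:

```python
from dataclasses import dataclass

@dataclass
class V2VMessage:
    """Toy stand-in for a V2V 'safety message' broadcast by the car ahead."""
    vehicle_id: str
    speed_mps: float    # its current speed, metres per second
    decel_mps2: float   # its current deceleration (positive = braking)
    hazard: bool        # it has flagged an obstacle, pothole, etc.

def follower_reaction(msg: V2VMessage, gap_m: float, own_speed_mps: float) -> str:
    """Decide how to warn the driver behind, using the broadcast from the
    (possibly out-of-sight) car ahead rather than machine vision."""
    closing_speed = own_speed_mps - msg.speed_mps
    if msg.hazard or msg.decel_mps2 > 3.0:
        return "warning tone"          # car ahead is braking hard or flagged a hazard
    if closing_speed > 0 and gap_m / closing_speed < 3.0:
        return "shake steering wheel"  # we would close the gap in under ~3 seconds
    return "no action"

# Car ahead braking at 5 m/s^2 -> warn the follower even before it is visible.
print(follower_reaction(V2VMessage("A1", 10.0, 5.0, False),
                        gap_m=40.0, own_speed_mps=25.0))  # → warning tone
```

The point of the sketch is that the reaction is driven entirely by the received message and the gap, with no camera or lidar in the loop.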

7)          Cord-nevers could be bigger threat to TV than cord-cutters

My generation grew up in the habit of gathering around a TV and watching whatever was on. Some families (not mine) had two TVs, and lucky kids got their own TV. That situation has changed over the past 5 years or so: young folk with access to broadband Internet watch TV or movies on their computers and never developed the habit of sitting around the family TV, except perhaps during special events. This behavior is probably permanent and not exclusively associated with the young; however, shifting demographics suggest it will become a bigger part of viewing behavior. The challenge broadcasters face is that there is more and more choice, and this is especially the case for cable networks who, with a few exceptions, have concentrated on the lowest common denominator. Ultimately, the power will shift from the traditional broadcaster to the content producer as a result.

“Viewers continue to flee traditional television. Canada’s top TV providers have lost almost seven times more customers so far this year compared with the same period in 2014, according to research from the consulting firm Boon Dog Professional Services. The Ottawa-based company looked at Canada’s seven publicly traded TV providers, including Rogers, Bell and Shaw. It found that, in total, they lost 153,000 subscribers up to this point in their 2015 fiscal year. Boon Dog estimates about 11.4 million people still watch TV via conventional means such as cable, so cord-cutting numbers remain a drop in the bucket, for now. But the industry may be in bigger trouble than we think. There are other looming threats. The first is the growing number of cord-nevers — people who have never subscribed to traditional television. There’s also the cordless-contemplators — people determined to cut the cord, but who haven’t quite pulled the plug yet.”

8)          Dynamic adverts for the 21st century

Putting product placement into a video stream is a lot more complicated than it sounds. A football field is a relatively static and well-defined thing, whereas a TV or movie set is a lot more complicated and the camera angles are likely to be more variable. A simple “green screen” approach probably won’t work, so you need some pretty heavy video processing to replace an object on the fly unless it happens to be in the background, like a name plate or something. Nevertheless, product placement, like those pop-up ads you see all the time now, is not something you can ignore or skip through with a PVR, so this is the way things are headed.

“This form of advertising, known as product placement, can be quite subtle. A character may open a fridge, for example, and you as a viewer don’t realize that different companies have paid for the various items that appear on the shelves. Or a character may be walking down a street, and you don’t register that someone has paid for a particular advertising poster to be mounted on the wall. And when the character gets into a car… well, I think you get my drift. The reason I’m waffling on about this here is that I’m currently visiting the UK and I’ve just seen the most amazing demonstration of next-generation advertising technology. This is similar to the yellow and blue lines being superimposed on an American football game on TV, except that it involves integrating products into TV shows and movies.”

9)          Magic Quadrant or Gartner ‘Graft’?

“Graft” is such a strong term. Gartner and all other industry researchers know what side of the bread the butter is on: most of their major subscribers are industry people and, unsurprisingly, most of the coverage and bullishness is associated with the companies who cooperate with, and buy, industry research. Nobody in an industry wants to hear that industry is dying so the coverage is almost always bullish and almost always wrong. It is understandable that small companies feel left out because they don’t get the accolades the big companies have bought and paid for but I rather doubt a lawsuit is the answer. Thanks to Nick Tang for this item.

“”If you are a major client, you get time with the Gartner analysts. But we are a very small client. The ombudsman told us we should increase our efforts to engage with the analysts but those analysts then declined our offer of a briefing at the TM Forum Live event in Nice this year [the major annual industry event for the revenue and customer management market]. They agreed to meet in the end, but only showed up for ten minutes. We wouldn’t say that the process is corrupt — I don’t think you can just buy your way into the quadrant — but we would say the process is very flawed. The way they collect the information is flawed — our biggest customers are in China but the Gartner analysts don’t engage with them in their language to collect information and anecdotes … Gartner denied a request from us to have their questionnaire translated into Chinese. And even though there is a review process of the information in the report before it’s published, there are still factual inaccuracies published.””

10)      A decade into a project to digitize U.S. immigration forms, just 1 is online

This story is one of many which illustrate how large government (and even non-profit) organizations are milked by large IT consulting firms. It is not abundantly obvious why any government needs to hire IBM, HP, CGI or any other consulting firm to mismanage a project most high school computer science students would be able to do in their spare time. The thing is, the large consultancies have learned how to take advantage of the ineptitude of government managers to line their own pockets. The incredible thing is that, despite debacle after debacle, the exact same mistakes are made, there are no penalties, and the same firms are allowed to bid on the contracts.

“Heaving under mountains of paperwork, the government has spent more than $1 billion trying to replace its antiquated approach to managing immigration with a system of digitized records, online applications and a full suite of nearly 100 electronic forms. A decade in, all that officials have to show for the effort is a single form that’s now available for online applications and a single type of fee that immigrants pay electronically. The 94 other forms can be filed only with paper.”

11)      Benchmarks put iPad Pro’s A9X chip roughly on par with Intel’s 2013 Core i5

I’ve remarked in the past about how Apple’s marketing power results in fawning coverage of the company’s products. I must have seen 20 or 30 articles about the iPad Pro (a somewhat larger version of the company’s tablet) and all heaped slavish praise on it. This one stood out for the same reason articles praising iPhone cameras stand out: people should not write about things they do not fundamentally understand. Whatever the merits of Apple’s latest ARM processor, benchmarks have to be understood in context. Comparing benchmark results on an iPad, which has a rudimentary, purpose-built operating system, with a MacBook or Surface Pro, which have fully evolved, general-purpose operating systems, is a test of the operating system, not the processor.

“As you can see, the A9X scored 3,233 on Geekbench’s single-core tests versus 1,831 for the iPad Air 2 and 2,537 for the iPhone 6S. “The A9X can’t quite get up to the level of a modern U-series Core i5 based on Broadwell or Skylake, but it’s roughly on the same level as a Core i5 from 2013 or so and it’s well ahead of Core M,” Ars writes. As for the graphics, the OpenGL version of the GFXBench benchmark shows the A9X beating not only all previous iOS devices but also Intel’s Iris Pro 5200 graphics powering the 15-inch MacBook Pro and the Intel HD 520 in the Surface Pro 4.”

12)      Robots threaten 15m UK jobs, says Bank of England’s chief economist

I wish I had access to web pages from the early 1900s so I could quote economists talking about how horseless carriages would cost millions of blacksmith jobs, or how steam tractors would decimate employment in the agricultural sector. To start with, we have precious few actual white collar robots, and the use of robots has been pervasive on the factory floor since the development of the assembly line. It all depends on what you call a robot: a combine is a robot agricultural worker and a numerically controlled milling machine is a robot mill operator. It takes time for the technology to be developed and deployed. Economies adapt. Get over it.

“The Bank of England has warned that up to 15m jobs in Britain are at risk of being lost to an age of robots where increasingly sophisticated machines do work that was previously the preserve of humans. Andy Haldane, the bank’s chief economist, said automation posed a risk to almost half those employed in the UK and that a “third machine age” would hollow out the labour market, widening the gap between rich and poor. The results of a Bank of England study, Haldane added, suggested that administrative, clerical and production tasks were most at threat. In a speech to the umbrella organisation for Britain’s trade unions, the TUC, he asked if the Luddites – reputed to have smashed machines during the Industrial Revolution – had been proved right two centuries on.”

13)      It’s way too Easy to Hack the Hospital

We hear a lot about the Internet of Things (IoT), which is just hooking stuff up to computers that you don’t think of as being hooked up to computers. None of this is really new: factories and hospitals have been doing it for years. As this article demonstrates, a problem arises when people who neither know nor care about computer security start hooking things up to the network. They not only expose the gizmo to hackers (in this case, pretty much anybody who wants access), they open their networks to hackers as well. Sorry about the stupid layout and annoying animations on this page: somebody needs to take Bloomberg’s web editor out back for summary execution. Thanks to my friend Humphrey Brown for this article.

“Like the printers, copiers, and office telephones used across all industries, many medical devices today are networked, running standard operating systems and living on the Internet just as laptops and smartphones do. Like the rest of the Internet of Things—devices that range from cars to garden sprinklers—they communicate with servers, and many can be controlled remotely. As quickly became apparent to Rios and the others, hospital administrators have a lot of reasons to fear hackers. For a full week, the group spent their days looking for backdoors into magnetic resonance imaging scanners, ultrasound equipment, ventilators, electroconvulsive therapy machines, and dozens of other contraptions. The teams gathered each evening inside the hospital to trade casualty reports. “Every day, it was like every device on the menu got crushed,” Rios says. “It was all bad. Really, really bad.” The teams didn’t have time to dive deeply into the vulnerabilities they found, partly because they found so many—defenseless operating systems, generic passwords that couldn’t be changed, and so on.”

14)      Germany Proposes Drone Registration

The major difference between little drones and big drones is cost, and the fact that a big drone can do a lot of damage and injury. Unfortunately, all you need is a credit card to get a large, heavy drone, and, as would be expected, that means a small number of people are doing very stupid things with them. The US, Ireland, and now Germany have announced plans to license drones based on their weight class, which makes some sense since the bigger they are the more dangerous they are. They should also require all drones to respect “keep out” areas and probably broadcast an ID as well.

“Anyone else see a trend brewing? The United States and Ireland both announced plans to implement a drone registration system, and now Germany is heading in that same direction. Citing a lack of regulations and growing public safety concerns, the German Federal Ministry of Transport and Digital Infrastructure proposed all drones – commercial and hobbyist – that weigh more than 0.5 kg (1.1 lb) need to have a license plate that could help identify the operator. You can read the proposal here (text reads in German, so you might need to translate).”
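The “keep out” idea is simple enough to sketch. The zone list, coordinates, and radii below are made up for illustration; real geofencing data would come from the aviation authority, and a real drone would check this continuously in its flight controller:

```python
import math

# Hypothetical keep-out zones: (name, centre latitude, centre longitude, radius in km).
NO_FLY_ZONES = [
    ("FRA airport", 50.0379, 8.5622, 8.0),
]

def distance_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points, via the haversine formula."""
    r = 6371.0  # mean Earth radius, km
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def flight_allowed(lat, lon):
    """Return False if the drone's position falls inside any keep-out zone."""
    return all(distance_km(lat, lon, zlat, zlon) > zr
               for _, zlat, zlon, zr in NO_FLY_ZONES)
```

Add a broadcast ID to that and an officer on the ground can tell whose drone is hovering where it shouldn’t be.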

15)      Audi to use 3D printed metal parts in production cars

Unfortunately, this article is pretty short on details, and it actually only suggests Audi might use 3D printing in production cars. Production cars in the context of Audi doesn’t necessarily mean high-volume cars either, since some Audis are extremely expensive, low-volume, albeit production, cars. It will be a long time before 3D printing can come anywhere near the cost of mass-production casting and machining. It is true that you can do things with 3D printing which you can’t do with casting and machining, but this just means they won’t do those things in high-volume products.

“Staying true to their slogan Vorsprung durch Technik (Advancement through Technology), German carmaker Audi is reportedly experimenting with making complex parts out of metal 3D printing technology, and plans to eventually fit them to production cars. While the use of 3D printing in auto manufacturing is not new—we recently reported on Ford’s use of additive manufacturing techniques to produce prototypes for some of its vehicles—advancements in the highly coveted area of metal 3D printing mean that in the near future, big-name car companies will be able to use 3D printed pieces as end-use parts. In particular, the metal 3D printing process Audi has been experimenting with is ideal for manufacturing geometrically complex parts that would be time-consuming and expensive to produce through traditional means such as casting. The 3D printed components, made from a fine metallic powder comprised of steel and aluminium beads less than half the thickness of human hair, are also denser than cast items.”

16)      BlackBerry Priv review: Android fixes the OS, but the hardware can’t compete

There was some interest when BlackBerry announced it would come out with an Android phone, but a lot of that fizzled once the price point was announced. You can’t expect to charge premium prices when your market share is a rounding error, even less so when you can’t be bothered to roll out hardware with a recent version of Android – even my two-year-old Nexus 5 runs Android 6. This review is pretty brutal, but also spot on. BlackBerry is in a death spiral and it is just a matter of time before it announces a “strategic review”.

“With the Priv, the company finally joins the mobile operating system duopoly by jumping into bed with the only major app ecosystem available to third parties: Android. The Priv runs an old version of Android: 5.1.1 Lollipop, the first of many disappointments the Priv will throw our way. Being a BlackBerry, the Priv of course has a hardware keyboard, but the keyboard isn’t any good. It’s so flat and tiny that it’s awful to type on; we greatly preferred the packed-in software keyboard. Still, the biggest disappointment is the price: a whopping $700. It’s not an unheard of sum for a mobile phone, but build quality issues and a long list of compromises just aren’t worth $700.”

17)      FCC revises guidelines, swears it will not ban third-party router firmware

The FCC caused a great deal of excitement in the enthusiast community when it announced proposed rules which would limit installation of third-party software on wireless routers. The idea is not necessarily a bad one: the FCC is charged with maintaining the integrity of the radio spectrum, and the radio behavior of a WiFi router can be significantly affected by the software it is running. Since third-party software isn’t vetted in any way, bad code can lead to havoc. The language has now been softened, but the problem remains.

“A few months ago, the FCC issued a set of security requirements meant to ensure that routers stayed within their assigned spectrum bands and didn’t cause problems for other hardware operating nearby. The rules caused significant concern in the router modding and security communities, however, because they specifically called out DD-WRT as a software package that should be blocked from install, and required manufacturers to submit an action plan detailing how they would prevent the use of unauthorized firmware. While the FCC’s goal — preventing unauthorized spectrum usage — wasn’t something many people had a problem with, the fear was that manufacturers would take the opportunity to kill third-party firmware support altogether, rather than trying to sandbox spectrum adjustments to meet FCC guidelines.”

18)      Shocking new way to get the salt out

I can’t say I understand how this technique works, but this article and the associated coverage seem to suggest it has the potential to be an energy-efficient method of desalinating water. This could be significant, provided the technology proves to be durable and can be scaled up, which the article seems to suggest is achievable. One problem, unfortunately, is that much of the need for desalination is in the developing world, where access to electricity is often a challenge.

“As the availability of clean, potable water becomes an increasingly urgent issue in many parts of the world, researchers are searching for new ways to treat salty, brackish or contaminated water to make it usable. Now a team at MIT has come up with an innovative approach that, unlike most traditional desalination systems, does not separate ions or water molecules with filters, which can become clogged, or boiling, which consumes great amounts of energy. Instead, the system uses an electrically driven shockwave within a stream of flowing water, which pushes salty water to one side of the flow and fresh water to the other, allowing easy separation of the two streams. The new approach is described in the journal Environmental Science and Technology Letters, in a paper by professor of chemical engineering and mathematics Martin Bazant, graduate student Sven Schlumpberger, undergraduate Nancy Lu, and former postdoc Matthew Suss.”

19)      How This Battery Cut Microsoft Datacenter Costs By A Quarter

Traditional data centers use the same sort of battery backup scheme used in the telephone world: banks of big, heavy, but cheap, lead acid batteries. If you have a UPS (uninterruptible power supply) for your PC you have a similar arrangement, except the battery is a somewhat smaller but more expensive sealed lead acid battery. This arrangement might have made sense in the olden days, but power failures today are typically much shorter in duration, and many data centers have a backup generator which kicks in a few moments after power is lost. What Microsoft has done is essentially insert a small, albeit appropriately sized, battery into its power supply design, which allows the power supply and whatever it is powering to bridge a short-term loss of power. According to the article, the net result of this small change filters through the datacenter and becomes quite significant.

“The new power supply, which Microsoft calls the Local Energy Storage (LES) unit, was designed as part of the Open Cloud Server hyperscale system that the company donated to the Open Compute Project last year and updated last October with some significant tweaks. In the spirit of openness that might seem a bit strange coming from Microsoft, the new LES specification is being opened up through the Open Compute community as well. It is significant to note that the Open Compute designs put forth by Facebook three years ago had already moved batteries into the Open Rack design to gain efficiencies. And Google said way back in April 2009, in a rare look at its internal datacenters, that it had not only been using containerized datacenters to boost efficiency since 2005, but had put 12 volt battery packs on its servers so they could ride out failures on local, rather than centralized, stored power. That was a decade ago, just to show you how far ahead Google can sometimes be compared to its rivals. Supermicro and others offer power supplies with battery backups built in, too.”
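To see why a per-server bridge battery can be so small, a back-of-the-envelope calculation helps. The load, ride-through time, and efficiency figures below are my own illustrative assumptions, not Microsoft’s actual numbers:

```python
# How much energy must a battery hold to carry one server until the
# backup generator spins up? (All figures are assumed for illustration.)
server_load_w = 250    # assumed server draw, watts
bridge_time_s = 30     # seconds of ride-through until the generator takes over
efficiency = 0.9       # assumed battery-to-load conversion efficiency

energy_needed_wh = server_load_w * bridge_time_s / 3600 / efficiency
print(f"{energy_needed_wh:.2f} Wh")  # → 2.31 Wh
```

A few watt-hours per server is a trivially small battery, which is why distributing it into each power supply beats a room full of centralized lead acid banks sized for outages that rarely last that long anymore.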

20)      520-million-year-old arthropod brains turn paleontology on its head

The fossils of the Burgess Shale include some pretty complex animals, including what are almost certainly active predators. An active predator, unlike, say, a jellyfish, requires some intelligence to hunt down its prey, which raises the question of how complex a brain could have become so soon after the emergence of the first complex animals, about 620 to 550 million years ago. It turns out that brains can fossilize, and scientists now have multiple examples of the brain of an early arthropod; they were pretty complex indeed – almost as complex as a modern crustacean’s. Now, shrimp – even big shrimp – aren’t exactly the sharpest knives in the drawer, but it is pretty remarkable that brains got that complicated that quickly.

“Science has long dictated that brains don’t fossilize, so when Nicholas Strausfeld co-authored the first ever report of a fossilized brain in a 2012 edition of Nature, it was met with “a lot of flack.” “It was questioned by many paleontologists, who thought – and in fact some claimed in print – that maybe it was just an artifact or a one-off, implausible fossilization event,” said Strausfeld, a Regents’ professor in UA’s Department of Neuroscience. His latest paper in Current Biology addresses these doubts head-on, with definitive evidence that, indeed, brains do fossilize. In the paper, Strausfeld and his collaborators, including Xiaoya Ma of Yunnan Key Laboratory for Palaeobiology at China’s Yunnan University and Gregory Edgecombe of the Natural History Museum in London, analyze seven newly discovered fossils of the same species to find, in each, traces of what was undoubtedly a brain.”
