Sunday, 24 August 2014

Astronauts find living organisms clinging to the International Space Station, and aren’t sure how they got there

Diatoms


During a spacewalk intended to clean the International Space Station, Russian astronauts took samples from the exterior of the station for routine analysis. The results were quite surprising. The astronauts expected to find nothing more than contaminants created by the engines of incoming and outgoing spacecraft, but instead found living organisms clinging to the outside of the ISS. They identified the organisms as sea plankton that likely originated from Earth, but the team couldn’t find a concrete explanation for how these organisms made it all the way up to the space station — or how they managed to survive.
A colorized scanning electron micrograph of a tardigrade. Yes, they look amazing.
Though NASA has so far been unable to confirm whether the Russians truly did discover sea plankton clinging to the exterior of the station, there is some precedent for certain creatures surviving the vacuum of space. Tardigrades, water-dwelling microscopic invertebrates, are known to survive a host of harsh environments. They can endure extreme temperatures (from just above absolute zero to well above boiling), radiation doses hundreds of times higher than the lethal dose for a human, pressures around six times greater than those found in the deepest parts of the ocean, and the vacuum of space. The organisms found on the ISS aren’t tardigrades, but the little invertebrates show that some living organisms from Earth can indeed survive the harshness of space.
The bigger mystery is not that the plankton survived, but how they made it all the way up there, 205 miles above Earth. The scientists have already dismissed the possibility that the plankton were simply carried up on a spacecraft from Earth, as the plankton aren’t native to the regions from which any ISS module or visiting spacecraft launched. The working theory is that atmospheric currents could be scooping up the organisms and carrying them all the way to the space station, though that would mean the currents reach an astonishing 205 miles (330 km) above the planet.
The International Space Station
Living organisms have been found far above Earth before, such as microbes and bacterial life discovered at altitudes of 10 and 24.8 miles, respectively — though those numbers are a far cry from 205 miles.
For now, we’ll have to wait to see if the Russian team confirms the findings with NASA. Then, maybe the two agencies can work together to figure out how plankton made it all the way up into space, and perhaps even discover exactly how the plankton manage to survive. The organisms aren’t alien life, but they do pose another fascinating mystery.

Rejoice, commuters and workers! The Chairless Chair exoskeleton lets you sit down anywhere, anytime

Chairless Chair: Sitting without a chair, production line worker

The Chairless Chair is an exoskeleton that attaches to your legs and lets you sit back, as if you’re sitting on a chair — but, in the words of Morpheus, there is no chair. Rather than relying on the tensile strength of solid wood or plastic and the gravity-defying nature of three or more supporting legs, the Chairless Chair uses clever battery-powered mechanics to achieve the same effect. When turned off, the Chairless Chair allows you to walk around normally. Early versions of the Chair are fairly chunky — you’ll certainly look mechatronic – but presumably future versions might be slim enough to hide innocuously beneath your pants. Suffice it to say that there are thousands of situations where working conditions could be improved by the Chairless Chair — and lots of outdoor activities, too.
The Chairless Chair exoskeleton, developed by Swiss startup Noonee, is essentially a clever application of mechanics. The device attaches to your hips and shoes, and straps to both your thighs and calves. When activated, a damper takes the load from your thighs/ass and funnels it into the heels of your shoes. When turned off, you can apparently walk around as normal. Exact details of the implementation are lacking, but it sounds like a 6-volt battery is used to vary the resistance of the damper, to offer a variety of sitting positions. The latest prototype, fashioned out of aluminium and carbon fiber, weighs around 2 kilograms (4.4 pounds) and is good for around 24 hours of use (no word on whether that’s 24 hours of sitting, though).
Chairless Chair, sitting in an office or something
Chairless Chair, sitting in a laboratory or something like that
As you can probably imagine, the Chairless Chair could revolutionize everything from production line work to standing on a crowded train to camping in the great outdoors. While sitting down for prolonged periods of time can shorten your life expectancy, standing for hours on end is very stressful on your musculoskeletal system. The Chairless Chair offers a fantastic compromise, especially in situations where chairs or stools are hard to come by. There are other solutions, of course, such as the incredibly low-tech Swiss Milking Stool — essentially a one-legged stool strapped to your ass — but the Chairless Chair is much more desirable due to the ability to turn it on and off, and the ability to accommodate a range of sitting positions.
Chairless Chair, sitting/standing in a crowded train
“In addition to resting your leg muscles, it also provides optimal posture,” Noonee’s co-founder Bryan Anastisiades tells CNN. “It keeps your back straight and can reduce the occurrence of bad postures for both healthy workers and those recovering from muscle related injuries.” A large percentage of workplace injury and illness is caused by musculoskeletal disorders (MSDs), which are often a result of poor posture, standing all day, and so on.
Both Audi and BMW will be trialing the Chairless Chair on their production lines later this year. There’s no word on pricing or general availability, but I doubt it’ll be that expensive; it’s actually a fairly simple piece of gear. Noonee is initially targeting production line workers, fruit pickers, surgeons, and other groups of workers who spend hours standing every day – but the CEO, Keith Gunura, also mentions consumer uses, such as riding on a crowded train.
Ultimately, though, I’m still a bit uncertain about how seriously we can take a company who has trademarked the term “Chairolution.”

The first metamaterial superconductor: One step closer to futuristic physics-defying contraptions

Meissner Effect (superconducting magnetic levitation)


In the realms of electronics, magnetism, and quantum mechanics, superconductivity has an almost mythical status. In some materials, when cooled below a critical temperature, electrical resistance instantly drops to zero and magnetic fields are completely expelled (see video below). Superconducting magnets are already used in MRI machines and particle accelerators like CERN’s LHC, and are being considered for advanced maglev trains. Zero electrical resistance means that a current can flow around a superconducting coil indefinitely (for at least 100,000 years) without any applied voltage — a feature that could completely revolutionize power distribution, power storage, electric motors, computers, and more.
The problem is, the hottest superconductor yet discovered still needs to be cooled to around -140 Celsius (133 Kelvin, -220 Fahrenheit) — and cryogenic cooling just isn’t feasible for everyday use. Now, however, some US researchers may have unearthed the secret of room-temperature superconductors: Building your own metamaterial superconductor from scratch.
As we’ve covered before, metamaterials are human-made materials that have alien, not-seen-in-nature properties. The most common example is negative refraction: In nature, every known material has a positive refractive index (it always bends light a certain way) — while metamaterials can bend light in the opposite direction. These materials have led to some interesting applications, such as invisibility cloaks. Now, researchers at Towson University, the University of Maryland, and the Naval Research Laboratory have done the same thing with superconductors: They’ve tweaked a compound in the lab, metamaterial-style, to raise its critical temperature. This empirical, deliberate approach is very different from usual superconductor research, which is mostly based on educated guesswork. [arXiv:1408.0704 "Experimental demonstration of superconducting critical temperature increase in electromagnetic metamaterials"]
Various superconductors and their critical temperatures
In theory, this is a very big step towards creating one of the most powerful, valuable, and elusive materials in the world: a room-temperature superconductor. While superconductors are already used extensively in science and medicine, the fact that they need to be kept at cryogenic temperatures (below -150C) makes them very expensive and unwieldy. A lot of work is being done on so-called “high-temperature superconductivity,” but the best anyone has managed is a critical temperature of -140C — HgBa2Ca2Cu3O8 (HBCCO), in case you were wondering.
Read our featured story: The wonderful world of wonder materials
In practice, the researchers still have a long way to go: Their metamaterial-like approach was able to raise the critical temperature of tin by 0.15 Kelvin. Still, in the realm of quantum mechanics, where almost nothing is known about why or how superconductivity exists in the first place, it’s big news. We know especially little about high-temperature superconductors — we think the “layers” of these complex compounds act like the electron equivalent of optical waveguides, steering electrons through the material with zero resistance. This new research might help us understand these high-temperature superconductors a little better, and maybe also tweak them to push the critical temperature ever closer to room temperature.
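To put that 0.15 Kelvin in perspective, here’s a quick back-of-the-envelope calculation in Python. Tin’s critical temperature of roughly 3.7 K is a standard textbook figure; the other numbers are the ones quoted in this article.

# How far along are we, really?
tin_tc_k = 3.7           # tin's critical temperature (textbook value)
delta_k = 0.15           # improvement reported in the metamaterial experiment
room_temp_k = 293        # about 20 C
best_known_tc_k = 133    # the -140 C record mentioned above

print(f"Relative gain for tin: {delta_k / tin_tc_k * 100:.0f}%")
print(f"Kelvin still to gain for a room-temperature superconductor: {room_temp_k - best_known_tc_k} K")

A roughly 4% bump for tin, set against a 160 K gap to room temperature, is why the result matters more as a proof of method than as a new record.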
A prototype superconducting power cable — awesome, but commercially unfeasible as it requires constant liquid nitrogen cooling.
If we can eventually master superconductors — and there’s every reason to believe that we can — then we can expect many facets of life to change very rapidly. Superconducting power lines could save billions of dollars in transmission losses — or allow for the building of world-spanning super grids. We could replace every transport system with cheap, super-fast maglev trains. It might even allow for cloaking devices… and I assure you, that’s just the beginning!

The American Midwest: Traveling where the cloud can’t follow

Cloud Failure

Every year, my dad’s relatives get together for a family reunion. I love visiting with my family, but it comes with one major downside: the location. You see, my dad’s family all live in rural Ohio and West Virginia — where the cloud goes to die. Not only is cell coverage spotty, but residential internet access isn’t much better. Speeds are generally atrocious, and you’re lucky if you can maintain a connection for an entire day in some parts of town. Against my better judgment, I decided to try to stream all of my entertainment during my most recent trip. Here’s how it went down.
The trip to Ohio is relatively painless, and I had a strong LTE connection for most of my time spent on the Pennsylvania Turnpike. It wasn’t until I left the major highways in West Virginia that cell coverage became a real problem. As you can see from Verizon’s official coverage map (below), large swaths of West Virginia and Ohio are white — meaning no coverage. In some cases, there wasn’t even enough signal for simple text messages to send properly. Trying to stream video or audio over a connection like that is a fool’s errand.
After a few failed attempts in the car, I decided to wait until I got settled into the hotel before I started testing my streaming services in earnest.
Verizon Coverage

A failure to communicate

Now, hotels aren’t exactly known for having the world’s most reliable internet connections, but I figured it was my best shot since it was in a city. My family lives about 30 minutes from the hotel, and it’s nothing but farmland and mountains out there. In other words, this was my chance. I connected my tablet, smartphone, and Vita to the hotel WiFi, and I started testing the connection. The average download speed was a little less than 1Mbps, and the upload speed was an abysmal 123Kbps. Even worse, the ping to the closest SpeedTest.net server was 100ms, and the round trip to my FiOS connection 300 miles away in Delaware was longer still. At that point, I knew for sure that any sort of fast-paced game would be impossible to control.
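Here’s the rough arithmetic behind that conclusion, sketched in Python. The measured figures come from the speed test above; the bitrate a game stream actually needs is my own ballpark assumption, not Sony’s published requirement.

# Rough lag and bandwidth math for game streaming on this connection.
measured_down_mbps = 1.0     # just under 1 Mbps, per the speed test
measured_rtt_ms = 100        # ping to the nearest SpeedTest.net server
assumed_stream_mbps = 3.0    # hypothetical bitrate for a watchable 720p stream

for fps in (30, 60):
    frame_time_ms = 1000 / fps
    frames_of_lag = measured_rtt_ms / frame_time_ms
    print(f"At {fps} fps, a {measured_rtt_ms} ms round trip is ~{frames_of_lag:.0f} frames of input lag")

print("Enough bandwidth for the stream?", measured_down_mbps >= assumed_stream_mbps)

Six-plus frames of lag at 60 fps, before the extra hop to Delaware is even factored in, is why twitchy games were a lost cause.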
Speed Test in WV
To start my night of frustration off right, I attempted to connect to my PS4 over Remote Play. I opened the app, hit connect, and waited patiently for about a minute. It searched the local network first, and then checked with Sony’s servers to find the address of any paired PS4. It found my console, and began the process of connecting, but it bombed out after checking the “connection environment.” I couldn’t even get it to the main menu. The Vita just refused to finish the connection process — likely due to the bandwidth constraints.
Vita Cannot Connect
I switched my Vita over to a Verizon MiFi with a slightly faster connection, but it was still largely unplayable. Sure, the latency was bad, but the severe artifacting made reading some of the in-game text nigh-on impossible. After about a minute of running around in Watch Dogs, that connection dropped as well. Compared to my experience with the strong LTE connection in my home state, this was a complete failure.

PlayStation Now is even better than I hoped it would be

PS Now


Ever since the PlayStation Now beta launched at the end of July, I’ve been pondering the value proposition of Sony’s streaming offerings. During this soft launch, the selection of titles is extremely limited, and the pricing model doesn’t quite fit. However, the streaming technology itself is surprisingly solid, so it definitely can’t be dismissed outright. I’ll be the first to admit that there is plenty of room for improvement, but let me explain why PS Now is actually better than I expected it to be.
To get started with PS Now, all you need is a PS4 and a North American PSN account. Simply launch the PlayStation Store, and scroll down to the “PS Now” menu item. From there, you can select from the roughly 100 PS3 titles that are currently offered. Unfortunately, you’re going to need to whip out your credit card at this point. Prices vary by title and rental length, but everything is too expensive right now. At launch, the cheapest price was $3 for four hours, but a handful of games are now down to $2 for the same four-hour chunk.
PS Now Game Page
For testing purposes, I ended up paying $8 for 30 days of access to Saints Row 2. With a service like Redbox, you can rent a game for roughly $2 per day. On Amazon, I can buy a physical copy of Saints Row 2 for under ten bucks. If the four-hour chunk model is going to stay, it needs to dive below a dollar — preferably fifty cents a pop. If Sony wants to keep the entry-level purchase in the $2 to $5 range, the minimum number of rental hours needs to increase at least sixfold. For the relatively ancient games available on the service right now, there’s no excuse for charging so much money. If Sony ever starts rolling out recent AAA titles on PS Now, then maybe we can talk about this premium pricing.
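For the curious, here’s the cost-per-hour math in Python. The prices are the ones quoted above; the hours-played figures for the 30-day and Redbox options are hypothetical assumptions, included just to make the comparison possible.

# Back-of-the-envelope rental pricing: (price in dollars, hours of play).
options = {
    "PS Now at launch (4 hr block)": (3.00, 4),
    "PS Now today (4 hr block)": (2.00, 4),
    "My suggested price (4 hr block)": (0.50, 4),
    "PS Now 30-day rental, ~20 hrs played (assumed)": (8.00, 20),
    "Redbox for a day, ~4 hrs played (assumed)": (2.00, 4),
}

for name, (price, hours) in options.items():
    print(f"{name}: ${price / hours:.2f} per hour")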
PS Now Main Menu
It’s certainly not all bad, though. The rental process is as easy as buying any title off PSN, and PS Now titles are featured on the main menu just like any native PS4 game. Provided that you have a substantial pipeline with relatively low latency to Sony’s servers, the experience is pretty much seamless. Load times don’t seem any worse than local copies of PS3 games, and the streaming experience has been rock solid the entire time. I haven’t seen any noticeable dips in performance, and latency hasn’t been an issue at all. Sure, the input lag isn’t going to match a local game, but everything that I’ve seen is playable. Even driving and fighting games have worked well for me, so I can’t complain.
After just a few minutes of playing a game over PS Now, I completely forget about all of the crazy streaming tech powering the experience. Frankly, that’s the highest praise I can give it. Provided that Sony can implement this same quality of service all over the world, it seems as if the core Gaikai technology powering PS Now is a real winner.
PlayStation TV
More than anything, I’m excited to see what Sony does next with PS Now. There has been plenty of talk surrounding PS1 and PS2 support, and it seems inevitable that we’ll see PS4 games grace the service before long. PS3, Vita, PS TV, and Sony smart TV support is definitely in the cards, and Sony already let slip that some sort of subscription-based pricing model is currently under development.
It’s now clear that game streaming has a lot of potential, and Sony has a commanding lead here. Microsoft has yet to publicly discuss a streaming or backwards compatibility strategy, OnLive is just now digging itself out of the hole it fell in a few years back, and Valve’s attempt to move into the living room is stalling. With a few tweaks here and there before PS Now leaves beta status, Sony might just have something special with PlayStation Now.

Roku takes aim at TV market with new Roku TV

Roku TV


With built-in media apps becoming more widespread throughout the TV market, it seemed like the set-top media box would soon be a device of the past. Perhaps Roku has seen this writing on the wall, as it announced the Roku TV — televisions with Roku functionality baked right in.
If you’re planning on purchasing a television any time soon, the common wisdom is to buy a cheap set to hold you over until 4K TVs and content are widespread. The 4K upgrade is just over the horizon, but if you need a new set right now, it isn’t worth paying the 4K premium just to watch a couple of Netflix shows and YouTube videos in the highest definition available. For now, your best bet is something of a side-grade until you can make the full 4K commitment. Roku is betting on this mentality, and hoping that its new Roku TV looks like the best option to hold you over until 4K becomes the norm.
Roku has partnered with two television brands, Hisense and TCL, to incorporate all the functionality of a Roku into TV sets without the need for Roku’s own streaming box. Even though the best Roku available is just $99 and only occupies a tiny square of shelf space, these three companies have posed an interesting question to shoppers: If you’re at the store shopping for a new TV and a media device, would you buy two separate items when you could buy a single combined device instead — especially if no functionality is lost?
Roku remotes
Much like the regular Roku devices, the televisions come with a minimalist remote, though with a slightly different layout and without the motion control. You’ll also be able to control your television with your smartphone or tablet, which is convenient considering you’ll likely be messing around on one of those devices while only sort of paying attention to Netflix anyway.
As for the actual televisions, they’re not mind-blowing, but they offer exactly what a placeholder TV should while awaiting 4K. The LED sets range from 32 to 55 inches, offer 720p or 1080p resolutions and 60Hz or 120Hz refresh rates, and include dual-band 802.11 WiFi, HDMI and USB inputs, and optical and headphone audio outputs. Hisense hasn’t set an MSRP for its sets, but TCL’s range from just $230 to $650.
As always, what makes Roku the best set-top media box (not counting various consoles, depending on how much you value big-budget gaming) is the breadth of services available. Most Smart TVs will come with the big-name services like Netflix, but only Roku has extremely specific, specialist channels. Obviously, though, if you need a new set-top media streamer but your current TV is going strong, it’s likely better to buy the set-top box than replace your television. If you need a new TV while you wait for 4K, though, the Roku TV might be your best option.

iPhone 6 rumor roundup: Will the next iPhone have a 4.7-inch sapphire display, NFC, or a reversible USB cable?

iPhone Sizes

Apple’s iPhone 6 press event is only a few weeks away — it’s scheduled for September 9 — so it’s no surprise that iPhone rumors are running wild right now. This time around, the rumor mill’s refrain is “bigger, stronger, faster.” The potential for larger screens, scratch-proof sapphire glass, and incredibly fast hardware specs is being widely reported by the enthusiast press and traditional news outlets alike. Unsurprisingly, Apple is going to have a lot of hype to live up to for the iPhone 6 release.
There are just too many iPhone 6 rumors floating around to count, so I’ve selected only a handful of the most intriguing examples to discuss today. Some of these are certainly more likely than others, but each one is incredibly tantalizing to gadget lovers. Whether or not you plan on buying Apple’s next smartphone, something here is bound to tickle your fancy. After all, once Apple implements a feature in its flagship product, other companies are bound to respond.
Size Comparison

4.7-inch and 5.5-inch iPhones

From the iPhone’s inception, there have been rumors about larger screens. Apple stuck with the 3.5-inch screen for years before finally upgrading to a 4-inch screen with the iPhone 5 in 2012. Now, the rumor mill is betting heavily on Apple releasing two new screen sizes: 4.7 inches and 5.5 inches. There doesn’t seem to be much consensus on whether these larger phones will be offered alongside the 4-inch models, though. If I had to guess based solely on my gut feeling, I’d say Apple probably won’t ditch its 4-inch phones any time soon. Besides, you can still buy a 3.5-inch iPhone 4S directly from the Apple Store.
We’ve been seeing these screens and miscellaneous phone parts show up frequently across the net, and these larger devices are certainly the most widespread rumor this time around. However, popularity doesn’t make the rumors true. Remember back when we were supposed to have a teardrop-shaped iPhone? That rumor ran rampant up until the moment the next model was actually announced with a uniform flat design. Unless you’re being told something directly by an Apple executive, take any discussion of future Apple products with a grain of salt.

Sapphire screens

The word on the street is that the new iPhones will sport sapphire screens. This rumor has been very persistent, and with good reason. As you can see in the video embedded above, sapphire glass is incredibly durable and scratch-resistant, so it’s perfect for smartphone screens. Stab it, scrape it, or shove it in your pocket with your keys — it makes no difference.
Considering how often I see friends and family with jacked-up smartphone screens, this would be a huge improvement. Unfortunately, we might have to wait another generation for these super screens. JP Morgan’s Rod Hall seems skeptical that Apple will be able to push out sapphire screens in 2014. If Hall is right, we might have to wait for the iPhone 6S or iPhone 7 for these nigh-on indestructible screens.
iPhone 6 Shell

iPhone 6 shell

So, individual parts have supposedly leaked here and there over the last couple of months, but an assembled iPhone 6 shell has just recently surfaced. These images are allegedly of a 4.7-inch iPhone 6, and it looks incredibly slick. If the real McCoy ends up looking like this, I’ll be pleased. However, Apple’s design aesthetic is constantly being aped by Chinese counterfeiters, so it’s not much of a stretch to think that something this polished could be completely fake.
iPhone 6 Batteries

With the addition of bigger screens comes the need for bigger batteries, right? According to a number of leaks, we’ll be seeing some notable bumps in the 4.7-inch and 5.5-inch models. One leak has the 4.7-inch iPhone 6 sporting an 1810 mAh battery, while another photo shows a 2915 mAh battery for the 5.5-inch iPhone 6. Considering that the iPhone 5S’s battery is roughly 1570 mAh, these are substantial jumps in capacity. I’d certainly like to see longer battery life from my next phone.
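Here’s how those rumored capacities stack up against the 5S, as a quick Python sketch. Keep in mind that a capacity bump doesn’t translate directly into longer battery life, since the bigger screens will draw more power; the mAh figures are the ones quoted above.

# Percentage capacity increase over the iPhone 5S's battery.
baseline_mah = 1570    # iPhone 5S, roughly
rumored = {"4.7-inch iPhone 6": 1810, "5.5-inch iPhone 6": 2915}

for model, mah in rumored.items():
    gain = (mah - baseline_mah) / baseline_mah * 100
    print(f"{model}: {mah} mAh (~{gain:.0f}% more capacity than the 5S)")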

Windows 9 technical preview, the first step towards fixing Windows, may appear next month

Windows 9 Threshold Start menu crop


Windows 8 was supposed to finally unify the computing experience, bringing tablets and PCs together with Microsoft’s modern UI at its core. To say that hasn’t worked out would be a grotesque understatement. Consumers by and large have either avoided Windows 8 or managed to put up with its tablet-oriented feature set while grumbling to anyone who will listen. In response, Microsoft is accelerating its release cycle, and sources now say the first technical preview of Windows 9 (codenamed Threshold) will be out in late September or early October.

Threshold is going to be the logical continuation of changes the company started to make in Windows 8.1. Microsoft is rolling back the clear tablet focus for desktop users while maintaining usability on tablets, in hopes that Windows slates will finally catch on. One of the most complained-about features in Windows 8 is the full-screen Start menu with big finger-friendly tiles. It matches Windows Phone and Xbox, but it makes no sense on a PC. Threshold will likely signal the return of the desktop Start menu, but it will have a Metro flair with smaller live tiles and smarter search. The 8.1 update added the visible Start button back to the taskbar, so Windows 9 is just finishing the job.
Microsoft’s updated design aesthetic was carried through into apps built to work on Windows 8. These full-screen apps were originally called “Metro,” and that will probably always be the most common name for the design style, no matter how much Microsoft tries to distance itself from the term. Windows 8.1 added the ability to run Metro apps in split-screen mode with a maximum of three or four (depending on screen resolution), but Threshold will make Metro apps fully resizable in traditional floating windows. The OS is still called Windows, after all.
Start Screen
Whether or not you use Metro-style apps on Windows 8, there’s always that strange Charms bar hiding on the side of the screen. You access it on tablets with an edge gesture, or by mousing to the corner on a regular PC. Threshold will allegedly kill that UI element, which really only served to hide common features in an unusual and not very discoverable place. That’s a lot of do-overs, but what about new stuff? Based on the information so far, Microsoft’s big new feature will be integration with Cortana, the voice assistant from Windows Phone. Windows 9 could also sport multiple virtual desktops. What do you do when desktop users aren’t happy? Give them more desktops.
The first technical preview might not show off all these changes, especially the much anticipated interface stuff. As a technical preview, it’s mainly intended to give developers a head start in ensuring software compatibility, so don’t expect a huge departure from Windows 8 just yet. Remember, the first developer builds of Windows 8 still had the translucent Aero UI. Despite the developer slant, Microsoft might still allow anyone to grab the first version of Threshold and take it for a spin.
You’ll probably want to wait at least until the consumer preview of Windows 9 to leave Windows 8 in the dust. Judging from Microsoft’s quicker update cycle, a consumer preview should be out around the end of 2014 with most of the final feature set in place. Windows 9 is expected to ship in Spring 2015, and only then will we see how successfully Microsoft has responded to customer complaints.

AMD teams up with OCZ to launch its first SSDs, the Radeon R7 series

AMD R7 SSD


For the past few years, AMD has been exploring products outside its usual CPU/APU/GPU divisions. Part of that process has been the company’s move into its own server hardware with SeaMicro, its semi-custom wins with the Xbox One and PS4, and the growth of its semi-custom business as a whole. At the same time, however, AMD has pushed into the consumer component space with Radeon-branded memory — and now, with Radeon-branded SSDs.
As with its memory offerings, AMD isn’t fabbing its own equipment — it’s paying other companies for specific SKUs and products that it then rebrands as its own. In this case, AMD is partnering with OCZ to launch a new set of SSDs in the consumer market.
OCZ’s own history with SSDs has run the gamut. In the beginning, the company was an early enthusiast leader; its Vertex and Vertex 2 families broke speed records and were quite affordable (by the standards of the day). Then OCZ was bitten badly by the early firmware bugs that struck the SandForce family, and the company’s follow-up drives had issues of their own. Now, AMD and OCZ are teaming up to offer a product they feel combines the best of both companies.
The new AMD Radeon-branded SSDs will be the first drives to use Toshiba’s new A19nm NAND. It’s not entirely clear how this differs from previous NAND generations, but we’re betting that Toshiba has continued iterating on its previous process to produce an incrementally better product. AMD is claiming that the drives use a specialized, overclocked firmware variant, but we expect that the differences, again, are fairly modest. AMD is staking its claim on reliability and durability more than raw performance, with a longer warranty and a higher write-endurance rating (30GB per day, up from 20GB on a standard drive).
We’re going to have performance tests in the near future, but the bigger picture isn’t about the drive’s standout performance. The focus here is on the total package AMD is putting together and the way it hopes to create a marketing position as a late entrant. The drives should be decent performers with good characteristics and reliability — AMD has emphasized to us that it waited specifically for the A19nm NAND to be ready for certain metrics before it went ahead with the launch.
The company is targeting aggressive price points — $99 for a 120GB drive, $164 for a 256GB drive, and $290 for a 480GB drive. That’s good, but it’ll have a hard time competing with the Samsung 840 EVO, which hits $250 for a 500GB drive. Here, AMD is hoping that longer warranty terms and a higher endurance (in terms of GB/day) will win over customers. It may have a point — while we love the EVO as a consumer drive, customers who prioritize reliability above all else may still want to steer clear of TLC NAND in general.
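Here’s the price-per-gigabyte comparison in Python, using the launch prices quoted above. Note that warranty length and write endurance, which is where AMD claims its edge, aren’t captured by this simple metric.

# Dollars per gigabyte at the quoted launch prices: (price, capacity in GB).
drives = {
    "AMD Radeon R7 120GB": (99, 120),
    "AMD Radeon R7 256GB": (164, 256),
    "AMD Radeon R7 480GB": (290, 480),
    "Samsung 840 EVO 500GB": (250, 500),
}

for name, (price_usd, capacity_gb) in drives.items():
    print(f"{name}: ${price_usd / capacity_gb:.2f} per GB")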

The PS4 is still selling much faster than expected – and Sony doesn’t know why

PS4 stock, in an Amazon warehouse


Sony’s PlayStation 4 has been a bona fide sales sensation, lighting up the charts on its way to 10 million units sold since launch — and amusingly enough, Sony’s own executives don’t actually know why the PS4 is selling so well. While the company’s sales trajectory is excellent, there’s some concern that an early burn could lead to a catastrophic flame-out at a later date.
Speaking to Eurogamer, Sony Worldwide Studios president Shuhei Yoshida discussed the triumphs and challenges of the console, including the company’s investigation into which features, exactly, consumers are buying it for. This has been a bit confusing — thus far, there simply isn’t any single clear reason why gamers are buying in such numbers or opting for the PS4 over the Xbox One.

Multifaceted appeal is a sign of strength

A quick glance at sales charts seems to confirm Sony’s statements that there’s no clear single driver of the PS4’s popularity. Almost all its top-10 titles are available on other consoles. Historically, there have been times when a console was strongly associated with sales of a particular title — but the trend is not an absolute one.
Top-PS4
Chart courtesy of VGChartz
When Yoshida says that Sony is concerned about exhausting the core gamer market, it’s not a ridiculous fear. The PlayStation Vita, Nintendo 3DS, and Wii U all debuted to strong initial sales only to fall off a cliff once early demand was met. Nintendo kept the 3DS alive with steep price cuts and game bundling, but the Wii U continues to lag. Sony lost billions on the early PS3 ramp — keeping the price up is an essential part of the console’s long-term strategy.
In retrospect, however, it may have been a mistake to read too much into Nintendo’s performance and what it said about the next-gen console business. Nintendo, by its own admission, is a software company that happens to make a hardware platform. Third-party titles are known second-class citizens in Nintendo’s eyes — you buy a console from them because you want to play Smash Brothers, Zelda, Metroid, or Mario — not because you’re interested in titles from EA or Ubisoft. That focus pays off when Nintendo delivers excellent games, but it leaves even the die-hard faithful with little reason to buy a console before the games are ready. There’s no point in spending $250 now for a paperweight until your favorite franchise releases a game.
With Sony and Microsoft, there’s a much greater sense of buying into a system — and while the PS4 doesn’t have any single game that seems to be driving sales, that could mean the floor is wide open for multiple console-defining titles to drop and seize the day.

PS4: A great success for pedestrian reasons?

PS4, in pieces [Image credit: iFixit]
My own theory is that the PS4 has succeeded thus far on its overall strength rather than a single killer feature. In 2006, Sony launched the last and most costly console of its generation, with an expensive bet on a then-unproven feature (Blu-ray). It had abandoned defining features like rumble in favor of a new six-axis control scheme. It had a disastrous launch campaign and a miserable launch attach rate, thanks in part to an incredibly difficult architecture and a weak developer support system. And when the head of the company proudly attests that the console is supposed to be difficult to program, because otherwise it might threaten Sony’s revenue stream, it’s not a great way to build developer support.
The PS4, in contrast, had none of that baggage. As Microsoft floundered in repeated attempts to find and address a core market, Sony stuck to a dirt-simple message on price and focus. While Microsoft reversed course on Kinect (before eventually unbundling it), Sony simply stuck to its guns. This time around, it was Sony out in front with indie announcements, Sony with more powerful hardware, Sony that had the more consistent message.
Before anyone jumps on me for being a fanboy, keep this in mind: I’m not declaring the PS4 a better console — I’m looking to explain why it has objectively sold more units. Unlike the PS2 and PS3, both of which debuted as new content delivery standards (DVD, Blu-ray) were coming online (and were rather excellent players), the PS4 doesn’t offer a new method of content consumption. Nor has its adoption sparked a jump in Blu-ray revenue (physical media remains a huge business, but it is sharply off its peak, as shown below).
Media sales
It’s not clear how many truly new sales Sony has driven. Yoshida says that “some of the early data was amazing in terms of the number of people who didn’t used to own PS3 have already purchased PS4… And some people never purchased any last-gen hardware: PS3, or Xbox 360 or Nintendo Wii.”
That’s two separate data points, and without percentages, we can’t really judge the impact. If 50% of the people buying PS4s owned an Xbox 360 but never a PS3, that would be a huge sign that Sony had seized market share from Microsoft. If 10% of the people buying a PS4 owned an Xbox 360, and 5% had never owned a previous-generation console, that’s still important — but the impact of the numbers is shaped by their size, and Yoshida didn’t reveal that data.
Absent reason to believe otherwise, we’re betting that the PS4 is firing on all thrusters rather than being propelled by any single feature. That’s got to be worrisome for a company trying to understand the appeal to new customers, but the fact is, gamers, en masse, appear to be buying into both platforms despite fears that game consoles were outdated or that neither system would be appealing.
That’s a win-win for everybody.

Smartphone usage surges while PCs show startling decline in new worldwide study



One of the widely discussed trends of the past few years has been the decline in PC sales and the rise in tablet sales. Tablets, more than smartphones, are credited with harming the US PC market — few people view a smartphone, even a highly capable device, as a complete desktop or laptop replacement.
Because so much of the discussion has been tablet- and US-centric, we’ve missed the impact that tablet and smartphone sales have had on the worldwide electronics market. Now that information has been combined and visualized for the first time, and while the data points aren’t ironclad (we’re drawing from a single source for this information), some of the trends are stark.
Some of these data points are so strong that they seem inaccurate — according to the chart, India’s PC usage fell to just 10% before rebounding slightly this year. In fact, we suspect there are anomalies in the data — if the Indian information were accurate, it would mean that PC usage had declined sharply (from 40% to just a little over 10%) while mobile use only increased from perhaps 13% to 22% over the same period. Since it’s unlikely that total internet use has declined so steeply, this suggests flaws in the data set.
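Here’s that sanity check as a few lines of Python. Treating the PC and mobile shares as additive is itself a simplifying assumption (the categories can overlap), but it shows why the chart looks suspect.

# Do the Indian data points add up? Figures are read off the chart above.
pc_before, pc_after = 40, 10
mobile_before, mobile_after = 13, 22

total_before = pc_before + mobile_before
total_after = pc_after + mobile_after
change = (total_after - total_before) / total_before * 100

print(f"Implied combined usage share: {total_before}% -> {total_after}% ({change:.0f}%)")
# A roughly 40% collapse in overall usage is implausible, hence the suspected data flaw.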
Nonetheless, Gartner Research does back up the idea that the Indian PC market is in sharp decline: In Q1 2014, sales in the Indian market had fallen to 70% of their Q1 2013 level, while smartphone sales for the same period had more than doubled. Sales data isn’t the same as internet usage information, but the trends do point in the same direction — sharply rising sales of smartphones as conventional PCs drop.
Even in nations where PC usage hasn’t declined, it’s mostly stopped increasing. Mobile usage has grown dramatically almost everywhere, while the PC remains flat.

Upending conventional wisdom

Steve Jobs and the original iPad
Once upon a time, the conventional wisdom was that the developing world would industrialize and adopt PCs as it did so, either by buying into lower-end replacement PCs that had reached the end of their Western lifecycles or by purchasing new machines custom-built for their particular markets. Intel and AMD have both made various efforts to stoke this trend, and companies like Dell and HP expanded into emerging spaces. Now, that market trend is eroding on the back of smartphone sales, sometimes for reasons that have nothing to do with OS preferences or even convenience (at least, as Western buyers typically understand the world).
When the electrical grid is practically as constant as the sunrise, every home has wired connectivity of some sort, and landlines are something you opt out of rather than something you’d give your eyeteeth to acquire, we’ve got the luxury of basing our product choices on niceties of form factor and suitability for a particular task. In nations where electrical power and data connections are both spotty, a device with substantial battery life that can recharge quickly and travels easily is vastly more useful — and that’s before you consider how dramatically smartphone performance and capability have increased in the first place.

Why the sales trend matters

In the 1960s, the titans of the computing industry were companies like Burroughs, Control Data Corporation, GE, IBM, NCR, Honeywell, RCA, and Univac. Not all of these companies are gone today, but of the ones that still exist, only one of them continues to be a major player in the computing industry — and the IBM of 2014 is fundamentally different from the IBM of the 1960s, even if it remains a pioneering research firm.
These companies didn’t just own the hardware businesses that built computers; they fundamentally owned the idea of what a computer was. This is easiest to see by looking at the fiction of the period — the computers of Star Trek were essentially mainframes, and this continued in Star Trek: The Next Generation and subsequent TV shows. For all its predictive capabilities, Star Trek never showed a handheld computing device that was as flexible or capable as the modern iPhone.
The Ultimate Computer
From “The Ultimate Computer.” Entire Star Trek episodes were built around the idea of mainframes gone wild.
Star Trek, of course, is scarcely representative of all science fiction of the 1960s, but many other authors made fundamentally similar assumptions — vast computing resources would be centralized, with planetary-scale AIs or Galactic Libraries.
In the same way, Microsoft and Intel were fundamental to the idea of what a computer was for several decades. Both companies have lost the chance to define the mobile era — Intel took a shot at doing so with Mobile Internet Devices, or MIDs, but failed — but neither can afford to be relegated to completely also-ran status in their own industries. Hence, both continue to create mobile products and to push into these spaces.
Read our featured story – There can only be one: Smartphones are the PCs of the future
Whether this trend will continue is an open question. There are already signs that tablet sales are cooling in the US and other developed countries. If developing nations follow the same trend, we’ll see smartphone sales begin to drop as well — or the continued commoditization of the product line could spur fresh growth as lower price points allow manufacturers to appeal to broader and broader market segments.

RISC rides again: New RISC-V architecture hopes to battle ARM and x86 by being totally open source

Prototype RISC-V chip


One of the pioneers of the original RISC instruction set has returned to the design table with a goal that’s nothing short of massive. David Patterson wants to reinvent computing with a completely open ISA, and he’s hoping that the time is right to finally blow the doors off the CPU industry — this time, by advocating for the adoption of the completely open ISA, RISC-V.
There are already a variety of open ISAs, but Patterson is hoping RISC-V will spark interest and uptake where other projects have sputtered. It’s hard to argue with the man’s credentials — he’s one of the original inventors of the RISC concept — but some of his critiques of the problems he wants RISC-V to solve ring truer than others.

Why RISC it? (sorry)

According to the whitepaper published by UC Berkeley, there are multiple reasons to opt for a RISC-V design, including restrictive IP agreements from companies like ARM and IBM, limited options for free licensing, and the length of time it takes to negotiate a license. The paper also argues that RISC-V is superior to other ISAs because it has learned from their various mistakes and incorporates a better mix of capabilities.
RISC V
RISC-V is designed for ultra-compact code sizes, and it supports quadruple-precision (128-bit) floating point values and 128-bit memory addressing — though it’s utterly impractical to think the latter will be needed in the short term. The whitepaper points out, however, that an address-size limit is one ISA mistake that’s hard to recover from — RISC-V’s 128-bit limit should serve us for the next 40-50 years.
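As a toy illustration of why address width is such a long-horizon decision, here’s a quick Python calculation. The starting point (roughly 1TB of addressable memory today) and the doubling rate are rule-of-thumb assumptions of mine, not figures from the RISC-V whitepaper.

# How long does a given address width last if memory needs double every ~2 years?
start_bits = 40            # assume ~1 TB (2^40 bytes) is the high end today
years_per_doubling = 2     # rule-of-thumb growth rate

for width_bits in (48, 64, 128):
    doublings_of_headroom = width_bits - start_bits
    print(f"{width_bits}-bit addressing: ~{doublings_of_headroom * years_per_doubling} years of headroom")

Under those assumptions, 64-bit addressing is comfortable for decades, and 128 bits pushes the limit out beyond any sane planning horizon, which is the point of baking it in now.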
It’s not clear, however, how much momentum actually exists around the standard. The whitepaper points to eight chip designs that Berkeley has already implemented — a prototype RISC-V chip is pictured at the top of the story — and claims that a RISC-V core is substantially more efficient than even a competitive ARM core, as shown below:
RISC V
The problem with RISC-V is that the target markets — small companies looking for extreme customization — simply may not be big enough to ever spark much in the way of toolset development, familiarity, or cost savings. How many companies both want to build their own extremely customized architecture and can afford to hire engineers who could do the job better than a default Cortex-A5 CPU from ARM? Our guess is not many. This leaves RISC-V in an uneasy no-man’s land — the engineers with the expertise to build the products are most likely familiar with other ecosystems, while the companies that would most benefit from the cost savings and customized features can’t afford the engineers.

Reigniting the great debate: CISC vs. RISC

Sun UltraSparc chip. Back in the olden days, there were a lot of RISC chips. With the success of Intel and x86, though, most of the RISC designs were squeezed out.
Furthermore, while the whitepaper leans on the idea of ARM as a RISC design that’s (supposedly) vastly more successful than Intel based on total number of CPUs shipped, that comparison is flawed for a number of reasons. I don’t want to rehash the CPU wars of the past decades in this story, but it’s worth revisiting the old paradigms of CISC and RISC that applied when Patterson did his first groundbreaking work in RISC. The CISC designs of the 1960s and 1970s often emphasized doing as much work as possible per instruction. Memory was both incredibly slow and very small — the more work you could pack into every single cycle, the less assembly code you had to write, the more compact your code could be, and the higher the throughput of the system (at least in theory). Some CPUs could support high-level programming features directly in machine code.
The original RISC philosophy argued that by doing less work per cycle, designers could drastically simplify their designs, cut the number of complicated corner cases and operations they supported, increase CPU clock speed, and reap enormous rewards from smaller designs and far smaller transistor budgets. In the beginning, there were enormous differences of scale that fed RISC’s advance, and while RISC-based architectures never made huge inroads into the PC business, they were extremely successful in servers and embedded product lines.
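To make the distinction concrete, here’s a deliberately simplified Python sketch. It models the two philosophies as toy instruction sets, not any real CPU: the CISC-style machine does a memory-to-memory add in one complex instruction, while the RISC-style load-store machine splits the same work into simple steps that only touch memory through explicit loads and stores.

# Toy model of the CISC vs. RISC split -- pedagogical only, not a real ISA.
memory = {0x10: 7, 0x14: 35, 0x18: 0}
regs = {"r1": 0, "r2": 0, "r3": 0}

def cisc_add_mem(dst_addr, a_addr, b_addr):
    # One complex instruction: read two operands from memory, add, write back.
    memory[dst_addr] = memory[a_addr] + memory[b_addr]

def risc_load(reg, addr): regs[reg] = memory[addr]
def risc_add(dst, a, b): regs[dst] = regs[a] + regs[b]
def risc_store(reg, addr): memory[addr] = regs[reg]

# CISC-style: a single instruction does all the work.
cisc_add_mem(0x18, 0x10, 0x14)

# RISC-style: the same work as simple, easily pipelined steps.
risc_load("r1", 0x10)
risc_load("r2", 0x14)
risc_add("r3", "r1", "r2")
risc_store("r3", 0x18)

print(memory[0x18])  # 42 either way; the difference is in hardware complexity

The CISC version needs less code, but the hardware behind that one instruction is far more complicated; the RISC version asks more of the compiler and the instruction stream in exchange for simpler, faster-clocking hardware.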
Read our featured story: 4004 to Sandy Bridge: A walk down CPU memory lane
As time passed, however, the line between CISC and RISC began to blur. CISC chips became more RISC-like, while RISC chips ramped up complexity, capabilities, and die size. A good example of this trend is the entire ARM vs. x86 debate. While the two ISAs are absolutely different, research has repeatedly shown that power consumption, clock speed, performance per watt, and instructions executed per clock cycle (a measure of efficiency) are all dependent on the CPU’s physical architecture — not its actual ISA.
If there’s a reason to be optimistic, however, it’s this: For decades, both CISC and RISC designs were mostly driven by brute force. When the number of transistors doubles every 18-24 months, with commensurate decreases in power consumption, it’s easy to make progress by simply throwing transistors at the problem.
We’ve long since hit the point of diminishing marginal returns from that approach — and that means RISC-V’s brand-new ISA, with its emphasis on efficiency and performance per watt, could yield dividends once other, more traditional (and, to be honest, simpler) methods of extracting further gains have been exhausted. It’s also noteworthy that the architecture appears to target the lowest end of the ARM Cortex division — areas where, as we’ve recently discussed, the CISC vs. RISC debate actually retains a shred of relevance. In areas where every square millimeter counts and power consumption is absolutely critical, RISC-V might offer advantages.
A die shot of the RISC-I chip, developed by UC Berkeley: 44,420 transistors, 5-micron (5,000nm) process, running at 1MHz
Whether these gains could be sustained against dominant market players like Qualcomm and Intel (companies that could adopt RISC-V themselves) is an open question. If history is any guide, it takes far more than some basic ISA support and theoretical appeal to seize market share from more dominant, established players — and we’re dubious of openness as an intrinsically important factor, despite the paper’s reliance on it as a prime justification.

Windows 9 is set to be unveiled on September 30

Windows 9, with resurrected Start menu and Metro apps running in a Window on the Desktop


Microsoft is planning to release a preview build of Windows 9 at a special press event on September 30, according to sources familiar with Microsoft’s plans. The date could still change, but September 30 lines up neatly with previous leaks that suggested a late-September or early-October release date for the Windows 9 technology preview. It’s still unclear exactly how many of Windows 9’s hotly anticipated features will actually make it into the September 30 release — but hopefully we’ll at least see the resurrected Start menu and Metro apps running on the Desktop. We wouldn’t be surprised if you have to wait a little longer for the consumer preview of Windows 9 before you get to play with your new Cortana digital assistant, however.
A few days ago, news leaked that Microsoft was planning to release the first public build of Windows 9 at the end of September or beginning of October. Now, sources are telling The Verge that Microsoft is planning a Windows 9 press briefing on September 30. The tech preview build of Windows 9 will likely be released at the event or shortly after. Hopefully everyone will be able to download the Windows 9 preview — just like the early public builds of Windows 8 — but there’s a chance that Microsoft will only release it to developers and professionals via TechNet and MSDN.
Windows 9, build 9788, leaked screenshot showing PC Settings Metro app running in a window on the Desktop
Windows 9, build 9788, leaked screenshot showing PC Settings Metro app running in a window on the Desktop. Microsoft’s current Windows 9 builds are around the 9820 mark.
This technology preview of Windows 9 will contain a lot of new features, but it won’t be feature-complete. In much the same way that the first Windows 8 preview still looked a lot like Windows 7, expect the Windows 9 preview to be a Frankensteinian hodgepodge of new and old features. We would expect the new Start menu to make it into the tech preview build, and the ability to run Metro apps in a window on the Desktop, but beyond that is anyone’s guess. One of Windows 9’s larger new features — integration of Cortana — might not make the cut. You should also expect a lot of smaller changes — UI tweaks, new stock Metro apps, etc. — to pop up a couple of months later in the first Windows 9 consumer preview.
Microsoft might also use the September 30 press event to tell us about the fate of Windows RT, which is being integrated into Windows Phone as part of the grand unified theory of Windows.
Cortana, digital assistant
Cortana, from the Halo universe, is expected to make an appearance in Windows 9
Microsoft, with its accelerated release schedule and exciting features like Cortana and virtual desktops, is clearly trying to prove that it still cares about normal (laptop/desktop) PC users. While Cortana is useful on a smartphone, I think it might be surprisingly powerful on a laptop or tablet as well. I might be getting a bit ahead of myself here, but it would be really cool if you could say “Cortana, show me all of my photos from 2012” rather than fiddling with various filters and search boxes in Explorer. Using Cortana on a PC could be just like the voice-activated computer in Star Trek — if Microsoft does it properly, anyway, and doesn’t just half-assedly drop the Windows Phone version into Windows 9.

Google has built a Matrix-like simulation of California to test its self-driving cars

Sergey Brin (Google) in the Matrix, as Neo, stopping bullets


Google, it has emerged, has built a “Matrix-style” simulation of the entirety of California to test its self-driving cars. While this in itself isn’t massively surprising given Google’s history as a software company (though it is a bit scary), the company is also petitioning California’s state officials to allow safety testing within the Matrix, instead of testing on real roads. This might sound a little terrifying — imagine if Ford started selling a car that had never been road-tested — but it makes quite a lot of sense for a self-driving car, where there are millions of conditions that need to be tested — conditions that are essentially impossible to test in real-time on real roads. “In a few hours, we can test thousands upon thousands of scenarios which in terms of driving all over again might take decades,” a Google spokesperson told the Guardian.
Information about Google’s Matrix-like simulation of California was obtained by the Guardian via a freedom of information request to California officials, supplemented by further details from a Google spokesperson. Google has built the entirety of California’s road system (about 172,000 miles) in software, along with accurate simulations of traffic, pedestrians, weather, and so on. There’s no word on the hardware being used to create the Google Matrix, but it’s probably a fairly large cluster of servers.
Google self-driving car prototype (this is a real thing)
Google’s virtual self-driving cars have so far driven more than 4 million miles within the Californian Matrix, facing all of the usual challenges that its real-world self-driving cars might face (wobbly cyclists, vehicles running a stop sign, and so on). By comparison, Google’s physical fleet of self-driving cars (mostly modified Toyota Priuses) had only driven 700,000 miles as of April 2014 — and more importantly, it has only driven on around 2,000 miles of road.
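A couple of quick ratios, using only the figures quoted above, show just how lopsided the comparison is:

# Virtual vs. physical testing, from the numbers in this article.
virtual_miles = 4_000_000
physical_miles = 700_000
virtual_road_network_miles = 172_000   # California's modeled road system
physical_roads_covered_miles = 2_000   # roads the real fleet has actually driven

print(f"Test mileage: {virtual_miles / physical_miles:.1f}x more in simulation")
print(f"Road coverage: {virtual_road_network_miles / physical_roads_covered_miles:.0f}x more road available in simulation")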
Google's self-driving car, with Schmidt, Page, and Brin
Most of Google’s self-driving car fleet consists of modified Toyota Priuses
Clearly, especially when it comes to self-driving cars, detailed simulations can provide even more feedback than real-world testing. Currently, most countries require real-world testing for new cars — on test tracks, closed public roads, etc. In California, self-driving cars are beholden to the same regulations: They must be road-tested.
Google, however, wants California to change that policy, to allow for self-driving cars that have only been tested in the Matrix. Earlier in the year, according to the Guardian’s freedom of information request, Google wrote the following to California state officials: “Computer simulations are actually more valuable, as they allow manufacturers to test their software under far more conditions and stresses than could possibly be achieved on a test track … Google wants to ensure that [the regulation] is interpreted to allow manufacturers to satisfy this requirement through computer-generated simulations.”
While I’m all for Google carrying out exceedingly detailed simulations — and I think it’s the only way that Google can bring a fully autonomous car to market within a reasonable time frame — I think some level of real-world testing is probably a prudent idea. You can make a simulation as detailed as you want, but it can never quite capture the full gamut of weird and wonderful things that a brain-powered pedestrian or fellow road user is capable of.
If Google wants to put vehicles on the road that are completely controlled by an on-board computer, and wants to avoid the tsunami of lawsuits that will surely follow, it will really need to prove beyond doubt that its self-driving AI is at least comparable to a human driver. This is a very different approach from that of conventional car makers, which are coming from the opposite (and much safer) direction — taking a normal car and slowly adding self-driving features. There are still some serious questions about how we handle the ethics and morality of robots — and an autonomous self-driving car is just one particular breed of robot. I think we are quite a few years away from building a fully autonomous car that can make decisions such as “do I run over the cat, swerve into a tree, or brake hard and cause a pile-up?”
In other news, California also recently ruled [PDF] that any self-driving cars must be fitted with a backup steering wheel, for situations where “immediate physical control” is required. This is a bit of a hit to Google’s new self-driving car prototype, which eschews the standard car controls for a single “Go” button.

Photorealism in Unreal Engine 4 in real-time: A sneak peek at next-gen games graphics

UE4 archviz, lake and leaves


If you were wondering what kind of graphics we can expect from true next-gen games on the Xbox One, PS4, and PC, feast your eyes on the photorealistic visual wizardry of French artist Koola. Do not adjust your screen: All of the images and videos that you see in this story were generated in real-time using Unreal Engine 4 (UE4), at decent frame rates (~30-60 fps) on a mid-range gaming PC.
Koola’s work, which was first shared on the Unreal Engine forums, almost entirely revolves around UE4’s enhanced lighting, including Lightmass global illumination. There are undoubtedly some high-resolution (4K?) textures involved, too, and Koola admits that he uses high poly counts for some models — but for the most part I think we’re mainly ogling some delicious, precompiled light maps.
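If you’re wondering what “precompiled light maps” actually involve, the core idea is simple: evaluate the expensive lighting offline, once per surface texel, and store the results in a texture that the engine just samples at runtime. Here’s a minimal Python sketch of that idea, using plain Lambertian diffuse from a single point light; it illustrates the general technique only, and is in no way UE4’s Lightmass implementation.

import math

# Bake a tiny lightmap for a flat, upward-facing 1x1 m floor patch:
# direct diffuse from one point light, evaluated per texel.
LIGHT_POS = (0.5, 2.0, 0.5)   # x, height, z in metres
LIGHT_INTENSITY = 4.0
NORMAL = (0.0, 1.0, 0.0)      # the floor faces straight up
RES = 8                       # 8x8 texel lightmap

def texel_brightness(u, v):
    px, py, pz = u, 0.0, v                     # world position of the texel
    lx, ly, lz = LIGHT_POS[0] - px, LIGHT_POS[1] - py, LIGHT_POS[2] - pz
    dist = math.sqrt(lx * lx + ly * ly + lz * lz)
    ndotl = max(0.0, (NORMAL[0] * lx + NORMAL[1] * ly + NORMAL[2] * lz) / dist)
    return LIGHT_INTENSITY * ndotl / (dist * dist)   # Lambert + inverse-square falloff

lightmap = [[texel_brightness((x + 0.5) / RES, (y + 0.5) / RES) for x in range(RES)]
            for y in range(RES)]

for row in lightmap:
    print(" ".join(f"{b:.2f}" for b in row))

At runtime the engine just looks up lightmap[y][x] instead of redoing this math every frame, which is why baked lighting is so cheap to render, and also why it breaks down as soon as lights or objects start moving.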

UE4 demo, by Koola.
Some high-poly UE4 trees, by Koola.
The videos are captured straight from Koola’s PC, which sports a Core i7-3770 CPU and GTX 670 GPU (i.e. a high-end gaming rig from last year). Most of the demos run in the 50-60 fps range, with some dipping down to 30. Except for the high poly counts, the demos aren’t actually all that strenuous — but it is important to note that these demos are primarily intended as architecture visualization (archviz). From the outset, Koola was trying to recreate scenes in UE4 that have the same characteristic photorealism of ray-traced art — and I think we can all agree that he’s succeeded.
Read our featured story: The future of real-time ray tracing
Whether Koola’s efforts can be directly transferred to gaming, I’m not sure. Games generally have many more (dynamic) objects on-screen, which would probably necessitate lower poly counts and smaller textures. Precompiled light maps aren’t so great when objects move around a scene, either (though UE4’s real-time lighting is pretty darn good).
In any case, I’m certain that we will see some absolutely beautiful games when developers finally start targeting new engines like UE4 and new hardware like the Xbox One, PS4, and modern PCs. Today, we are still mostly seeing games that target Xbox 360- and PS3-era software and hardware. After treading water for almost a decade, game graphics should finally get a massive boost over the next year or two.