the unofficial Slashdot digest

Machine Learning Expert Michael Jordan On the Delusions of Big Data

Posted by samzenpus in Management • View
First time accepted submitter agent elevator writes In a wide-ranging interview at IEEE Spectrum, Michael I. Jordan skewers a bunch of sacred cows, basically saying that: the overeager adoption of big data is likely to result in catastrophes of analysis comparable to a national epidemic of collapsing bridges; hardware designers creating chips based on the human brain are engaged in a faith-based undertaking likely to prove a fool's errand; and despite recent claims to the contrary, we are no further along with computer vision than we were with physics when Isaac Newton sat under his apple tree.

U.K. Supermarkets Beta Test Full-Body 3D Scanners For Selfie Figurines

Posted by samzenpus in News • View
Lucas123 writes Walmart-owned ASDA supermarkets in the U.K. are beta testing 3D full-body scanning booths that allow patrons to buy 6-in to 9-in high "selfie" figurines. Artec Group, a maker of 3D scanners and software, said its Shapify Booth, which can scan your entire body in 12 seconds and use the resulting file to create a full-color 3D printed model, is making its U.S. debut this week. The 3D Shapify booths are equipped with four wide view, high-resolution scanners, which rotate around the person to scan every angle. Artec claims the high-powered scan and precision printing are able to capture even the smallest details, down to the wrinkles on clothes. The scanning process generates 700 captured surfaces, which are automatically stitched together to produce an electronic file ready for 3D printing. Artec offers to print the figurines for booth operators (retailers) for $50 for a 6-in model, $70 for a 7.5-in model, and $100 for a 9-in figurine.

The obvious question is

By ruir • Score: 3 • Thread
Does it print *naked* figurines?

Boycott ASDA

By hughbar • Score: 4, Informative • Thread
Many of us boycott ASDA anyway, since it's Walmart. Waitrose provides good food and pays its staff. And no, I don't work for Waitrose.

Re:UK article, US units

By 91degrees • Score: 5, Funny • Thread
Well, given that fuel is dispensed in litres, but distances are measured in miles, wine is measured in ml and beer in pints, the systems we tend to use are somewhat fluid.

wide like whoa

By darkitecture • Score: 3 • Thread
>>Artec Group, a maker of 3D scanners and software, said its Shapify Booth, which can scan your entire body in 12 seconds and use the resulting file to create a full-color 3D printed model, is making its U.S. debut this week.
>>The 3D Shapify booths are equipped with four wide view, high-resolution scanners, which rotate around the person to scan every angle.

It's the US, you better fucking hope they're wide view!

New Microsoft Garage Site Invites Public To Test a Wide Range of App Ideas

Posted by samzenpus in Developers • View
An anonymous reader writes Microsoft today launched a new section on its website: The Microsoft Garage is designed to give the public early access to various projects the company is testing right now. The team is kicking off with a total of 16 free consumer-facing apps, spanning Android, Android Wear, iOS, Windows Phone, Windows, and even the Xbox One. Microsoft Garage is still going to be everything it has been so far, but Microsoft has simply decided it's time for the public to get involved too: You can now test the wild projects the company's employees dream up.

Unpaid labour?

By innocent_white_lamb • Score: 4, Insightful • Thread

Could someone tell me why we would want to do unpaid labour for Microsoft?

I'm quite prepared to test and help support Linux and open source projects. Microsoft? Not so much....

And in the fine print...

By Torp • Score: 3 • Thread

... you indenture your first born to Microsoft in exchange for using their apps.
Also, you are liable for patent fees.

Will Fiber-To-the-Home Create a New Digital Divide?

Posted by samzenpus in Technology • View
First time accepted submitter dkatana writes Having some type of fiber or high-speed cable connectivity is normal for many of us, but in most developing countries of the world and many areas of Europe, the US, and other developed countries, access to "super-fast" broadband networks is still a dream. This is creating another "digital divide." Not having the virtually unlimited bandwidth of all-fiber networks means that, for these populations, many activities are simply not possible. For example, broadband provided over all-fiber networks brings education, healthcare, and other social goods into the home through immersive, innovative applications and services that are impossible without it. Alternatives to fiber, such as cable (DOCSYS 3.0), are not enough, and they could be more expensive in the long run. The maximum speed a DOCSYS modem can achieve is 171/122 Mbit/s (using four channels), just a fraction of the 273 Gbit/s (per channel) already reached on fiber.


By ArmoredDragon • Score: 5, Insightful • Thread

Just to elaborate... the author is extremely vague here. Let's just pick an arbitrary number, say 10mbit, which is actually quite slow (in my opinion; the local cable co provides 150mbit connections and just started rolling out gigabit, so maybe I'm biased).

Anyways, what services CAN'T you obtain at 10mbit? Nothing health related comes to mind, nothing education related comes to mind, and social goods... what the FUCK does that even mean? Anyways, a 10mbit link is fully capable of streaming 1080p video, which is about the most demanding consumer grade application I can think of.

Therefore, I have no idea what possible "divide" the author could be referring to. Furthermore, the author strikes me as being grossly uneducated about the topic because of the blatant misspelling of the acronym DOCSIS.

If he wants to make a better case (which it sounds like he's pushing for some kind of socialist and/or social justice agenda) then he should at the very least give examples of WHAT, EXACTLY these people wouldn't have access to.

He would have a case for a slow upstream (it's common for DSL providers to provide less than megabit upstream rates) in health care if, say, a medical practitioner needed an HD video feed to evaluate their patient (which doesn't seem to be a likely scenario), but he didn't state that. And that still doesn't apply to anything else he mentioned.

Third World America

By Required Snark • Score: 3 • Thread
This is another symptom that the US is sliding out of the first world and into the third world. It goes along with our creaky unmaintained road, water and sewage infrastructure, along with our badly out of date airports and crappy passenger rail system.

And then there's our overpriced and underperforming health delivery system. (Note: ACA/Obamacare is a part of the solution, not a part of the problem.) And our failing K-12 education, which is severely underfunded and strangling on bureaucracy.

Along with the steadily declining state level college/university systems. (And before the right wingers start screaming about foreign students, remember that they come from places where it's much harder to get into any school and a lot of the higher educations options are not as good as the US, even with our decline. Both public and private schools love out of country students because they pay full tuition.)

But it's all OK, because the upper 10%, and mostly the upper .01% and above, are doing really well. For example, six members of the Walton family had the same net worth as either the bottom 28% or 41% of American families combined (depending on how it is counted).

Of course historically low corporate tax levels have nothing to do with this, right?

Although taxes paid by corporations, measured as a share of the economy, rose modestly during the boom years of the 1990s, they remained sharply lower even in the boom years than in previous decades. According to OMB historical data, corporate taxes averaged 2 percent of GDP in the 1990s. That represented only about two-fifths of their share of GDP in the 1950s, half of their share in the 1960s, and three-quarters of their share in the 1970s.

The share that corporate tax revenues comprise of total federal tax revenues also has collapsed, falling from an average of 28 percent of federal revenues in the 1950s and 21 percent in the 1960s to an average of about 10 percent since the 1980s.

The effective corporate tax rate — that is, the percentage of corporate profits that is paid in federal corporate income taxes — has followed a similar pattern. During the 1990s, corporations as a group paid an average of 25.3 percent of their profits in federal corporate income taxes, according to new Congressional Research Service estimates. By contrast, they paid more than 49 percent in the 1950s, 38 percent in the 1960s, and 33 percent in the 1970s.

So is it any wonder that the US is at best standing still, and more likely moving backwards, when it comes to national infrastructure spending? And guess where the money goes?


By r_naked • Score: 3 • Thread


With that said, no, it isn't going to create any more of a divide than already exists. I have Brighthouse Cable, and I can get their 90mb plan for around $80/mo, but I am sticking with their 30mb plan that is bundled with their basic HD plan. Why? I used mrtg to monitor my usage and found that I wasn't taking advantage of the extra bandwidth. We (at least in the US) have no services that take advantage of the extra bandwidth. I can stream Netflix, Amazon, etc... in HD just fine. Granted, their idea of HD sucks, but that isn't the point. Before the MPAA found out about USENET (and I still want to find out who talked -- and beat them), I more than took advantage of the extra bandwidth, but now that USENET is gone (well, so neutered as to be useless for my purposes), I never find myself "waiting".

Now, what we need is more UPSTREAM bandwidth. I get 5mb up, and that is usable, but having 30/30 would be REAL nice.

With all that said, this is obviously *MY* use case scenario. I would love to hear from others in the US that need more than 30mb, and what you use it for / how you use it.


By Bengie • Score: 5, Interesting • Thread
With current technology, a single strand of fiber can handle the entire world's Internet bandwidth. Statistical multiplexing works best with large amounts of traffic, something a fiber consolidator can easily do, but copper cannot. I would rather have a 1gb fiber connection to a chassis with 2,000 other customers, a 3tb/s backplane, and 1tb/s of uplink, than a 1gb coax connection with 5gb shared among 100 people, to a node that has 800 people and 20gb of uplink.

Going fiber essentially removes all choke points from the last mile, completely gets rid of the middle mile, and lets customers plug directly into the trunk. Then it's just a matter of sizing the trunk. It doesn't matter how shared it is as long as there is no congestion.
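The contention comparison in this comment can be made concrete with some quick division. A minimal sketch, using the comment's own (hypothetical) topology numbers, computing the worst-case per-subscriber floor if every user transmits at once:

```python
# Worst-case per-subscriber bandwidth under full contention,
# using the hypothetical numbers from the comment above.

def per_user_floor(shared_bps, users):
    """Bandwidth each subscriber gets if everyone transmits at once."""
    return shared_bps / users

# Coax: 5 Gb/s shared among 100 people on a segment,
# feeding a node of 800 people with 20 Gb/s of uplink.
coax_segment = per_user_floor(5e9, 100)    # 50 Mb/s
coax_node    = per_user_floor(20e9, 800)   # 25 Mb/s

# Fiber: 2,000 customers on a chassis with 1 Tb/s of uplink.
fiber_uplink = per_user_floor(1e12, 2000)  # 500 Mb/s

print(coax_segment, coax_node, fiber_uplink)
```

Even at full saturation, the fiber chassis in this scenario leaves each customer an order of magnitude more headroom than the coax node; in practice statistical multiplexing means the floor is rarely hit on either.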


By TubeSteak • Score: 4, Informative • Thread

Not having the virtually unlimited bandwidth of all-fiber networks means that, for these populations, many activities are simply not possible. For example, broadband provided over all-fiber networks brings education, healthcare, and other social goods into the home through immersive, innovative applications and services that are impossible without it.

I think this point requires further explaining.
Why exactly do I need Gbit service to bring healthcare into my home?

Alternatives to fiber, such as cable (DOCSYS 3.0), are not enough, and they could be more expensive in the long run. The maximum speed a DOCSYS modem can achieve is 171/122 Mbit/s (using four channels), just a fraction of the 273 Gbit/s (per channel) already reached on fiber.


DOCSIS 3.0 does not have a maximum limit on the number of channels that can be bonded.
The initial hardware would only bond up to 8 channels (~304 Mbit/s), but 16 channel (608 Mbit/s) hardware is already being rolled out by Comcast in the form of rebadged Cisco DPC3939 Gateways.

2015/2016 we might see 24 channel (912 Mbit/s) and 32 channel (1.2 Gbit/s) hardware.
2016/2017 is most likely, in the form of DOCSIS 3.1 modems, which use completely different modulation, but will have 24/32 channel DOCSIS 3.0 baked into them so that the ISPs can seamlessly upgrade from DOCSIS 3.0 to 3.1.

Cable's game plan is to use DOCSIS 3.1 to put off pulling fiber to the home, which keeps their costs low and will allow them to offer (multi)gigabit speeds using a hybrid fiber/co-ax infrastructure.
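The channel-bonding figures in this comment are easy to sanity-check. A back-of-envelope sketch, assuming roughly 38 Mbit/s of usable payload per bonded downstream 256-QAM channel (the commonly quoted North American figure after overhead; treat it as an approximation, not a spec value):

```python
# Rough DOCSIS 3.0 downstream throughput from channel bonding.
# Assumes ~38 Mbit/s usable per 6 MHz 256-QAM channel after
# FEC and framing overhead (approximate, North American plant).

PER_CHANNEL_MBPS = 38

def bonded_mbps(channels):
    """Aggregate downstream rate for a given bonding group size."""
    return channels * PER_CHANNEL_MBPS

for n in (8, 16, 24, 32):
    print(n, "channels ->", bonded_mbps(n), "Mbit/s")
# 8 -> 304, 16 -> 608, 24 -> 912, 32 -> 1216 (~1.2 Gbit/s)
```

These match the ~304, 608, 912, and 1.2 Gbit/s figures above, which is why 32-channel DOCSIS 3.0 is the crossover point where cable starts advertising "gigabit" without pulling fiber to the home.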

Oldest Human Genome Reveals When Our Ancestors Mixed With Neanderthals

Posted by samzenpus in Science • View
sciencehabit writes DNA recovered from a femur bone in Siberia belongs to a man who lived 45,000 years ago, according to a new study. His DNA was so well preserved that scientists were able to sequence his entire genome, making his the oldest complete modern human genome on record. Like present-day Europeans and Asians, the man has about 2% Neanderthal DNA. But his Neanderthal genes are clumped together in long strings, as opposed to chopped up into fragments, indicating that he lived not long after the two groups swapped genetic material. The man likely lived 7000 to 13,000 years after modern humans and Neanderthals mated, dating the mixing to 52,000 to 58,000 years ago, the researchers conclude. That's a much smaller window than the previous best estimate of 37,000 to 86,000 years ago.
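The "long strings versus chopped-up fragments" logic can be sketched with a toy model: recombination breaks inherited tracts at roughly one crossover per Morgan per generation, so the mean length of introgressed Neanderthal segments shrinks roughly as 1/generations since admixture. The numbers below are illustrative, not taken from the study:

```python
# Toy model of dating admixture from Neanderthal segment lengths.
# After g generations, recombination leaves introgressed tracts
# with a mean length of roughly 1/g Morgans (simplified model;
# the input tract length here is made up for illustration).

def generations_since_admixture(mean_tract_morgans):
    return 1.0 / mean_tract_morgans

YEARS_PER_GENERATION = 29  # a common assumption for ancient humans

g = generations_since_admixture(0.004)   # e.g. a 0.4 cM mean tract
print(g, "generations ->", g * YEARS_PER_GENERATION, "years")
```

With these illustrative inputs the gap comes out around 7,000 years, the low end of the 7,000-to-13,000-year window in the summary; the real analysis fits observed tract-length distributions rather than a single mean.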

Re:Question for sequencing expert.

By kinko • Score: 4, Informative • Thread

I don't know if ancient samples are processed differently, but for 'fresh' samples, the DNA gets broken up into small fragments (200-1000 base-pairs long), and then these fragments get sequenced. All bits of the genome have a roughly even chance of getting sequenced, and with thousands or millions of copies of each fragment, you normally get reasonably even coverage over the whole genome.

The problem is when you map your sequences back onto a reference genome (i.e. the currently known chr1, chr2, chrX, etc). The aligning software will have trouble deciding where to place a fragment that is part of a highly repetitive sequence (like centromeres or telomeres), or is duplicated several/many times (e.g. large gene families that have large sections of the genes in common, or pseudogenes that look like copies of other genes). In addition, we don't even know the exact sequence for some of these regions, so our reference human genome is constantly being updated (currently up to version 38).

For bioinformatics analysis, sometimes it is easier to sweep some of this under the rug. For example, some people use a reference genome that masks out the centromeres and telomeres (i.e. our reference sequence just has NNNNNNNNNNNN bases here, instead of As, Cs, Gs and Ts). Otherwise there are databases that list the regions containing repeated sequences or duplicated segments, so you can check any of your findings to make sure they aren't in a suspicious region.
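That last filtering step is simple interval lookup. A minimal sketch, with made-up coordinates standing in for a real blacklist database of repetitive/duplicated regions:

```python
# Sketch of the "check findings against suspicious regions" step:
# flag any variant call that falls inside a known repetitive or
# duplicated interval. All coordinates here are made up.

suspicious = {
    "chr1": [(121_500_000, 125_000_000)],   # e.g. a centromeric region
    "chr2": [(90_000_000, 91_000_000)],
}

def in_suspicious_region(chrom, pos):
    """True if (chrom, pos) lands inside any blacklisted interval."""
    return any(start <= pos < end for start, end in suspicious.get(chrom, []))

calls = [("chr1", 123_000_000), ("chr1", 5_000), ("chr3", 42)]
flagged = [c for c in calls if in_suspicious_region(*c)]
print(flagged)  # [('chr1', 123000000)]
```

Real pipelines do the same thing at scale with interval trees or bedtools-style sorted-interval intersection rather than a linear scan.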

Re:Yeah but ...

By Doubting Sapien • Score: 5, Insightful • Thread
Mod parent up. The article as written is dumbed down and misleading in many ways. Against my usual temperament, I'm going to make a sociological/anthropological argument that someone reading the article will draw very wrong conclusions about the nature of prehistoric Neanderthal-modern human interaction.

Genetic inheritance, or progeny, happens to be the only evidence we have right now about early Neanderthal-modern human interaction. But it does not say anything useful about when we "first had sex with" them, as the article claims. Consider the following: archaeological evidence suggests that large-scale violence we would consider warfare was a part of human life as far back as 7,500 or possibly 14,000 years ago. Does that mean ancient society was all about peace and love before that time? No. There is too little information to make such sweeping conclusions.

To return to the subject at hand, not all sexual encounters with Neanderthals are going to leave evidence for us to conveniently find. What we *DO* know at this point is that at least one such encounter resulted in a pregnancy that was carried to term, and the resulting offspring lived long enough to have children of his/her own who continued to survive. That's ALL we know. Put another way, imagine the young men and women of ancient communities playing a game of "fuck, marry, or kill" that included their funny looking neighbors. The visual may not be pleasant, but any earlier incidents of war-rape and deliberate infanticide due to parental rejection will leave little to no evidence behind for us. And barring extreme luck, there is almost NO WAY we can know if/when such incidents occurred. Who really knows when Neanderthals and us *FIRST* had sex?

Re: Exinction

By jc42 • Score: 4, Insightful • Thread

My guess is that the fact that no organisms exist with a Neanderthal genome defines them as extinct. Where one draws the line is more art than science I guess ... I know that there are some genetics in us (like the HMG group of proteins) that are ancient, but work so well that we still retain them. That doesn't mean the first species to have evolved them isn't extinct, it just means we evolved from them.

Well, I don't think that quite matches the scientific concept of "species". By your definition, almost all species who were alive 50,000 years ago would be considered extinct, but hardly any biologists would agree with that. It's true that no humans alive today have 100% Neanderthal genes, but it's also nearly certain that there are no living humans with 100% Cro-Magnon genes, either. What happened would be considered a mixing of several human sub-species after migrations of one or more African groups into Eurasia. The Cro-Magnon sub-species disappeared, too, and modern human Caucasian and Asian sub-species are the results of that mixing. This sort of thing happens in species all the time, when conditions allow such genetic mixing, and the result is rarely considered a new species.

The fact is that modern humans are all one species. We can and do interbreed when groups mingle, and there are no groups of modern humans that are genetically incompatible. If sub-species "disappear" by genetic mixing, that is usually not called an extinction event. It's just the routine and normal mingling of subspecies.

An interesting contrast is that most North American duck species are known to hybridize occasionally, and the offspring are usually fertile. Does this mean they're really all one species? No, because they all mingle a lot, but interbreeding is rare. They have "behavioral" species-separation features, mostly based on female mate choice. The females are mostly all mottled brown (protective coloring), and the males often approach females of other species (because they can't tell them apart either ;-). But the females usually only accept males that have the "right" color markings; the others are ugly to them. This suffices to keep the species separate, though there is probably a very low level of genetic interchange between many of the species.

But humans aren't like this. Even if we do generally prefer mates in our own subspecies, most of us do find many members of other subspecies physically attractive, and we'll mate with them given the opportunity. This means that we really are all the same species. We now have good evidence that the Neandertals were merely another subspecies, because when they had the opportunity, they did interbreed with those slender, dark-skinned folks who migrated into their territory. They did so often enough to produce a new subspecies that's physically distinct from either of the earlier two (or three or more).


By Truth_Quark • Score: 3 • Thread
I think that some of those Africans look a little bit more Homo Sapien than Europeans who have the Neanderthal Genes.

A little bit more upright, less stooped, a little bit less hairy, a little mound of forebrain in their foreheads.

There's a lot of genetic variation in Africa by comparison though. I'm thinking of those tall, really black-skinned, Sudanese looking people.

52,000 to 58,000 years ago?

By Vinegar Joe • Score: 4, Insightful • Thread

Strangely enough, beer was invented 57,999 years ago.

Two Exocomet Families Found Around Baby Star System

Posted by samzenpus in Science • View
astroengine writes Scientists have found two families of comets in the developing Beta Pictoris star system, located about 64 million light-years from Earth, including one group that appears to be remnants of a smashed-up protoplanet. The discovery bolsters our theoretical understanding of the violent processes that led to the formation of Earth and the other terrestrial planets in the solar system. "If you look back at the solar system when it was only 22 million years old, you might have seen phenomena that's a lot more like what's happening in Beta Pic," astrophysicist Aki Roberge, with NASA Goddard Space Flight Center, Greenbelt, Md., told Discovery News.

Wrong distance away

By dunkindave • Score: 5, Informative • Thread
Beta Pictoris is 63.4 light years away, not 64 million light years. 64 million light years would be at the other end of the galaxy and probably not even observable. When the article gets basic facts wrong I stop reading.

Re:Wrong distance away

By ganv • Score: 4, Informative • Thread
That error jumped out at me also. It's like describing a city 93 miles away and instead saying it is 93 million miles away, which instead of being a 1.5-hour drive is all the way to the sun. It is really useful to get a cosmic distance scale in your head: billions of light years is the size of the visible universe, millions of light years are distances to nearby galaxies, 30,000 light years is the distance to the center of our galaxy, and 4 light years is the distance to the nearest stars.

Re:Wrong distance away

By Shavano • Score: 5, Informative • Thread

The linked article got it wrong, which is why the summary is wrong. As usual, the linked article is garbage and you have to dig into links you find there to get something close to reality.

Reasonably well written summary here:
Research here:

Will the Google Car Turn Out To Be the Apple Newton of Automobiles?

Posted by samzenpus in Technology • View
An anonymous reader writes The better question may be whether it will ever be ready for the road at all. The car has fewer capabilities than most people seem to be aware of. The notion that it will be widely available any time soon is a stretch. From the article: "Noting that the Google car might not be able to handle an unmapped traffic light might sound like a cynical game of 'gotcha.' But MIT roboticist John Leonard says it goes to the heart of why the Google car project is so daunting. 'While the probability of a single driver encountering a newly installed traffic light is very low, the probability of at least one driver encountering one on a given day is very high,' Leonard says. The list of these 'rare' events is practically endless, said Leonard, who does not expect a full self-driving car in his lifetime (he's 49)."

Re:How hard is it to recognize a stoplight?

By cheater512 • Score: 4, Interesting • Thread

If the cars that fall back to AI communicate their observations and decisions back to Google, and from there to other cars, then the next car wouldn't need the AI and could improve knowledge of the area; plus, any particularly bad problem spots could be highlighted for further investigation at Google HQ.

Normal drivers don't have LIDAR. I assume it is a massive assistance for some aspects of Google's work.

Re:How hard is it to recognize a stoplight?

By TheGavster • Score: 5, Insightful • Thread

I think the real goal would be to have all vehicles self-drive; then they can be coordinated to interlace at intersections, removing the need for stop lights and saving a ton of fuel!

Re:Pre-mapped environments are a dead end

By mjwx • Score: 4, Interesting • Thread

Someone(s) at Google didn't think this one through.

I think quite a few people at Google have thought about that, came to the same conclusion as you and started working on the problem.

The thing that people don't get is that it will take years, if not decades, to get fully autonomous cars onto the road. They aren't due out in 2018, and yes, we know what models are coming in 2018: an updated 370z, a new NSX, and a few others no-one has any interest in.

The first autonomous cars won't be by Google; in fact I doubt there will be a Google car. The first autonomous cars will be Mercs or Toyotas built using Google's technologies, and the autonomous part will only work on specially outfitted roads (and they will be controlled, limited-access roads at first), so you'll still be required to drive a car. In fact you probably won't see a car without a steering wheel or other controls in your lifetime.

You're quite right that roads will need to be upgraded to provide telemetry to autonomous cars, and this will happen gradually over many, many decades.

Re:How hard is it to recognize a stoplight?

By phantomfive • Score: 4, Interesting • Thread

Google's SDC has been tested thousands of times with a huge range of pedestrian scenarios. It may not be better than an alert and primed human, but it is almost certainly better than an average human.

I'd really be interested if you have a reference for this. Even if your reference is just a Google PR person, that's still better than nothing.

Re:Rain and snow?

By Type44Q • Score: 4, Funny • Thread

Sure performance will be degraded in bad weather, and the car will have to slow down to compensate. Which is exactly what humans do.

Considering that Texans and Okies tend to speed up when the streets are slippery and visibility's been reduced, I suspect this confirms my suspicion: hicks aren't human! :p

Michigan Latest State To Ban Direct Tesla Sales

Posted by samzenpus in Politics • View
An anonymous reader writes As many expected, Michigan Governor Rick Snyder signed a bill that bans Tesla Motors from selling cars directly to buyers online in the state. When asked what Tesla's next step will be, Diarmuid O'Connell, vice president of business development, said it was unclear if the company would file a lawsuit. "We do take at their word the representations from the governor that he supports a robust debate in the upcoming session," O'Connell said. "We've entered an era where you can buy products and services with much greater value than a car by going online."

Re:Tesla wasn't the target, it was China

By starless • Score: 4, Insightful • Thread

I can't justify two cars, and if I own a car, it has to be able to drive 1000 miles in a day.

If you routinely have to drive so far then an electric wouldn't work for you.
However, if driving long distances is rare then an electric plus occasional rental (e.g. zip) ought to work.

Here you go:

By Anonymous Coward • Score: 5, Informative • Thread


By JustNiz • Score: 5, Informative • Thread

"Mr. Musk is a brilliant man, and Tesla is an innovative company. We can all respect that," says Jim Appleton, the president of the New Jersey Coalition of Automotive Retailers. "But he doesn't get what it takes to do business in New Jersey."

Translation: Musk won't pay off all the useless parasites represented by Jim Appleton and all the corrupt government officials like Governor Rick Snyder the required under-the-table money to do business in their state.

So much for a free market

By Lucas123 • Score: 5, Informative • Thread

These outdated statutes were originally designed to protect little dealerships from the threat of big auto opening their own dealerships if one of their indirect dealers refused to carry their lemons. So dealers under pressure from Detroit were forced to sell the crappy cars next to the good ones.

Today, prohibiting direct sales protects only the dealerships and harms the consumer. There’s no reason to prohibit a consumer from buying directly from the manufacturer.

interstate commerce?

By markhahn • Score: 3 • Thread

I don't understand why this silliness isn't being slapped down by the feds.

BitTorrent Performance Test: Sync Is Faster Than Google Drive, OneDrive, Dropbox

Posted by timothy in Hardware • View
An anonymous reader writes Now that its file synchronization tool has received a few updates, BitTorrent is going on the offensive against cloud-based storage services by showing off just how fast BitTorrent Sync can be. More specifically, the company conducted a test that shows Sync destroys Google Drive, Microsoft's OneDrive, and Dropbox. The company transferred a 1.36 GB MP4 video clip between two Apple MacBook Pros using two Apple Thunderbolt to Gigabit Ethernet Adapters, a website serving as a real-time clock, and the Internet connection at its headquarters (1 Gbps up/down). The timer started when the file transfer was initiated and then stopped once the file was fully synced and downloaded onto the receiving machine. Sync performed 8x faster than Google Drive, 11x faster than OneDrive, and 16x faster than Dropbox.
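For scale, it's worth working out the physical floor of this test. A quick sketch: the fastest possible transfer of a 1.36 GB file over a gigabit link, plus what the reported multipliers imply (the article gives ratios, not absolute times, so the Sync time below is a placeholder):

```python
# Back-of-envelope numbers for the Sync benchmark above.

FILE_GB = 1.36      # size of the test MP4, decimal gigabytes
LINK_GBPS = 1.0     # gigabit Ethernet between the two MacBooks

# Wire-speed floor: no protocol can beat this on that link.
wire_floor_s = FILE_GB * 8 / LINK_GBPS   # ~10.9 seconds
print(round(wire_floor_s, 2))

# The article reports only ratios. If Sync took t seconds,
# the cloud services took roughly:
def implied_times(t):
    return {"Google Drive": 8 * t, "OneDrive": 11 * t, "Dropbox": 16 * t}

print(implied_times(60))  # t = 60 s is purely hypothetical
```

Put another way: a LAN-direct protocol is bounded only by the ~11-second wire floor, while the cloud services pay for an upload, server-side processing, and a download, so large multipliers on a fast LAN are unsurprising.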

Re:Comparing LAN to WAN Speeds

By AceJohnny • Score: 4, Interesting • Thread

Actually, while they indeed compared two computers on the same LAN, they also included a computer on the internet. Furthermore, one of Dropbox's touted features is that it's able to detect and use peers on a LAN to avoid the unnecessary round trip through the cloud. I don't know about Google Drive, but judging by the results I suspect they can do the same.

And, more importantly, they compared the other clients on the same setup.

How you got modded "+4 insightful" is beyond me.


By Bengie • Score: 5, Informative • Thread
It's not that they're designed not to use all of your bandwidth; it's that they can't. I've tested Dropbox, and it breaks up the file into chunks and uploads them synchronously using REST calls. This meant my connection was constantly bouncing between 0% and 100%, causing bursts of packet-loss because it never gave TCP enough time to level out. BitTorrent on the other hand is great at not hosing my connection. I can run it near 100% and it will back off as it detects latency going up, preempting the need for packet-loss to signal congestion.
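The delay-based back-off described here is essentially what BitTorrent's uTP transport does with LEDBAT-style congestion control: track the base RTT, and throttle before the queue is deep enough to drop packets. A simplified sketch of the core rate update (the gain and rates are illustrative, not the uTP constants):

```python
# LEDBAT-style delay-based rate control, heavily simplified:
# shrink the send rate as measured queueing delay approaches a
# target, instead of waiting for packet loss.

TARGET_DELAY_MS = 100.0   # LEDBAT's standard queueing-delay target

def adjust_rate(rate_bps, base_rtt_ms, current_rtt_ms, gain=0.1):
    """One control-loop step: scale rate by distance from the target."""
    queueing_delay = current_rtt_ms - base_rtt_ms
    # off_target > 0: queue is shallow, speed up.
    # off_target < 0: queue is building, back off before loss occurs.
    off_target = (TARGET_DELAY_MS - queueing_delay) / TARGET_DELAY_MS
    return max(rate_bps * (1 + gain * off_target), 1.0)

idle = adjust_rate(10e6, base_rtt_ms=8, current_rtt_ms=8)      # ramps up
busy = adjust_rate(10e6, base_rtt_ms=8, current_rtt_ms=158)    # backs off
print(idle, busy)
```

Because the controller reacts to rising latency rather than loss, it yields to competing TCP traffic, which is exactly the "back off as latency goes up" behavior described above.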

Re:Comparing LAN to WAN Speeds

By MatthiasF • Score: 5, Informative • Thread
Maybe because 3-4 people actually read the Sync blog post where it states, and I quote:

"Our tests were conducted over local LAN – on the same switch – in order to rule out available bandwidth as a limiting factor. It’s important here to note that Dropbox, Google Drive and Microsoft OneDrive all rate-limit uploads and do not fully utilize the 1 Gbps bandwidth available (in regards to the office Internet connection, not the LAN switched). We’re confident that a slower Internet connection would yield similar results."

In other words, people agreed with me because they knew what I said to be true.

Not only did they give themselves the preferential treatment of the same LAN, they also intentionally adjusted their tests to discount an advantage of a competitor. Again, quoted verbatim from the blog post:

"Dropbox has a deduplication scheme in place – what this meant for our tests is that even though we deleted the video file from our Dropbox folder, traces of it still remained and Dropbox got ~50% faster at transferring the same video file each subsequent time we uploaded it. To correct for this, we needed a new file that wasn’t bit-for-bit identical to the video file we previously transferred. "

Why don't you RTFA.


By Bengie • Score: 5, Informative • Thread
I have a 50/50 dedicated fiber connection with a rock solid 0.35ms ping to my ISP and a solid 8ms ping to Dropbox servers. Why is my connection only doing 10mb/s with Dropbox and getting packet-loss, while I can use BitTorrent at 45/45 up & down at the same time and not have loss or latency? Dropbox seems to have the bandwidth, but the quick bursts are wreaking havoc with my ISP's traffic shaping via their Cisco router. The way Cisco is calculating the mb/s seems to be via some sliding window, which allows a quick spike of a burst to happen in the first 1/4 of a second, but then clamps down. Because my network latency is so low, the TCP stream can ramp up really fast. Once the Cisco router clamps down on the connection, I'm already uploading nearly 100mb/s and TCP can't back off in time before loss happens.

The reason loss occurs after the clamping is because my ISP uses small buffers. They don't like buffer bloat. My max latency to my ISP before loss starts to occur is about 10ms. Since the connection is dedicated, and their trunk is sized about 3x more than peak bandwidth, it's normally not an issue.

This wouldn't be an issue if Dropbox transferred the data as a single steady stream, but it instead does a very jarring start/stop cycle, which makes my bandwidth very spiky. I'm thinking of enabling traffic shaping on my pfSense box, but I don't feel like messing with it quite yet.

I guess the actual problem really is the Cisco router, but Dropbox is still incredibly slow.
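The interaction Bengie describes, a policer that tolerates a brief burst and then clamps down, can be sketched with a toy token bucket. This is illustrative only, not Cisco's actual rate-limiting algorithm, and the numbers (50 Mb/s, a shallow 20 KB bucket) are assumptions chosen to match the comment's scenario.

```python
class TokenBucketPolicer:
    """Toy single-rate token-bucket policer (illustrative; not Cisco's
    actual implementation). rate: bytes of credit added per ms;
    burst: bucket depth in bytes (maximum tolerated burst)."""

    def __init__(self, rate_bytes_per_ms, burst_bytes):
        self.rate = rate_bytes_per_ms
        self.burst = burst_bytes
        self.tokens = burst_bytes
        self.dropped = 0

    def send(self, nbytes, elapsed_ms):
        """Attempt to send nbytes after elapsed_ms of refill; True if it passes."""
        self.tokens = min(self.burst, self.tokens + self.rate * elapsed_ms)
        if nbytes <= self.tokens:
            self.tokens -= nbytes
            return True
        self.dropped += nbytes
        return False

# 50 Mb/s is roughly 6250 bytes/ms; a shallow bucket mimics small buffers.
paced = TokenBucketPolicer(rate_bytes_per_ms=6250, burst_bytes=20000)
bursty = TokenBucketPolicer(rate_bytes_per_ms=6250, burst_bytes=20000)

# A paced sender stays under the rate every millisecond and loses nothing...
paced_ok = all(paced.send(6000, elapsed_ms=1) for _ in range(100))

# ...while a TCP stream that ramped up fast dumps ~100 KB in one interval
# and gets clipped, exactly the "spike then clamp" behavior described.
burst_ok = bursty.send(100000, elapsed_ms=1)
```

A steady single stream at the contracted rate sails through, while the same average throughput delivered in bursts exceeds the bucket depth and is dropped, which is why BitTorrent's many slower streams behave better here than Dropbox's start/stop cycle.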

Re:Is it open source yet?

By Junta • Score: 4, Interesting • Thread

It's not that difficult. But after setting it up for a group of people and then setting up Seafile, I prefer Seafile. If you aren't an admin user in ownCloud, it's pretty tough to know what groups you are in and what groups can be shared with. Seafile does a much better job on that front.

Plus the ownCloud sync client doesn't seem very good, and the mobile clients cost money where Seafile's are free.

ownCloud might have gotten the good name, but sadly it doesn't have the best implementation.

Deutsche Telekom Upgrades T-Mobile 2G Encryption In US

Posted by timothy in Management • View
An anonymous reader writes T-Mobile, a major wireless carrier in the U.S. and subsidiary of Germany's Deutsche Telekom, is hardening the encryption on its 2G cellular network in the U.S., reports the Washington Post. According to Cisco, 2G cellular calls still account for 13% of calls in the US and 68% of wireless calls worldwide. T-Mobile's upgrades will bring the encryption of older and inexpensive 2G GSM phone signals in the US up to par with that of more expensive 3G and 4G handsets. Parent company Deutsche Telekom had announced a similar upgrade of its German 2G network after last year's revelations of NSA surveillance. 2G is still important not only for that 13 percent of calls, but because lots of connected devices rely on it, or will, even while the 2G clock is ticking. The "internet of things" focuses on cheap and ubiquitous, and in the U.S. that still means 2G, but lots of things that might be connected that way are ones you'd like to be encrypted.

How's this affect StingRay(tm)s

By cant_get_a_good_nick • Score: 3 • Thread

Obligatory Ars Link. From what I understand, fake towers work by forcing you to downgrade to 2G. Will this obviate that risk?

The Classic Control Panel In Windows May Be Gone

Posted by timothy in Technology • View
jones_supa writes In Windows 8, there was an arrangement of two settings applications: the Control Panel for the desktop and the PC Settings app on the Modern UI side. With Windows 10, having two different applications has started to look even more awkward, which has been voiced loud and clear in the feedback too. Thus, the work at Microsoft to unify the settings programs has begun. The traditional Control Panel is being transformed into something temporarily called "zPC Settings" (sic), a Modern UI app that merges the current two settings applications.

Re:Please Microsoft...

By PRMan • Score: 5, Funny • Thread
Even worse. I RDP'ed into a server the other day and nobody, not even those running Windows 8 on their laptops, could figure out how to log out of the server.

Re:Please Microsoft...

By FSWKU • Score: 5, Insightful • Thread

Maybe I am wrong, but over the years I have noted an increasing condescension of IT people toward "mere users". I wonder why that is. Bear in mind that IT typically isn't the company's cash cow but "overhead", making this condescension rather inappropriate imho. Even on /. there are many "users" who are not IT people: designers, programmers, etc. I wonder why the interface they are using is apparently less important than computer maintenance software, or any other user experience, for that matter.

The "overhead" designation is precisely the reason IT people tend to hate users (at least in my experience). The end-user sees the IT person as nothing more than an electronic janitor whose sole purpose is to clean up the messes that they, the user, were too careless or too inept to prevent from happening in the first place. Thus, they don't bother to learn how to do things properly, they don't learn how to keep from getting a virus, they don't learn how to do even the simplest of things because "That's IT's job. I shouldn't have to know computers!" No, they don't have to know the ins and outs of every modern OS, but they should know how to at least keep it from obliterating everything they're working on (meaning stop clicking "OK" on every damn thing that pops up!). Then to top it all off, they behave as if security policies, best practices, etc. don't apply to them, even though management approved them as a site-wide mandate...

So in short, users see IT as "the help" and treat them as such. And much like a janitor who is constantly cleaning up after idiots who have no concern for anyone other than themselves, the IT worker learns to hate certain users because they seem to have a mission in life to make IT's job as miserable as possible.

Why a GUI? Well, back in the day....

By MasterOfGoingFaster • Score: 5, Informative • Thread

That's one thing I never understood, why Microsoft went GUI with the servers like they did, other than to know that they sold a lot of server OSes to people that had no business running servers in the first place...

Simple. Most business people had been exposed to DOS, then moved to Windows and found it much easier to use and understand. The Novell guy comes in and tries to sell a Netware server. Yep - looks like DOS. I came in with a Windows server. Looks just like his PC. He sees File Manager, drive letters, Notepad, Paint, and suddenly he feels like this is the more advanced system, and he is far more comfortable with it.

A lot of the Netware guys around my area were extremely arrogant, and treated their customers like crap. Once they got a server installed, the customer was clueless and the vendor would abuse that. Our business model was to be open with the system and point out that we can easily be replaced, keeping us focused on their satisfaction. With NT Advanced Server (the correct name), the business owner could actually watch us and understand what we were doing with his system. We replaced a fair amount of Netware servers in those days. And you can see who won.

Screw this. I'm sticking with Windows 9.

By mmell • Score: 3 • Thread


By sexconker • Score: 5, Insightful • Thread

So that multiple programs can share the same settings system-wide. The worst thing about Linux is that every program works in a different non-standard way.

Like putting system config in /etc and user config in $HOME, you mean?

Like putting system config in HKEY_LOCAL_MACHINE and user config in HKEY_USERS, you mean?
The registry is trash, but so is a mish-mash of non-standard textfiles strewn about.

The problem that's specific to Windows is that programs can decide to use the registry, text files, or both, and when they use text files they can be in my documents, (which is now a library with no fixed location), the program's installation folder, the system-wide application data folder, or a user-specific application data folder. When using an application data folder, you have the choice of using Local, LocalLow, or Roaming. No one in the world understands the difference between these folders or why some programs use one over another (or use multiple!).

It wouldn't be a problem if everything was relegated to living in one of:
A: The application's install directory
B: A single directory (one per application) in a specific user/system directory (or both)
C: Living in the registry (again, user/system/both as appropriate)

But when applications can choose A, B, C, A & B, A & C, B & C, or A, B & C it's a fucking nightmare.
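The combinatorial mess described above can be made concrete: an application (or a troubleshooter hunting for its settings) ends up probing several plausible locations in some precedence order. The sketch below is illustrative only; the precedence, the `ExampleApp` name, and the `config.ini` filename are assumptions, since real Windows applications each pick their own mix.

```python
import ntpath  # Windows path semantics, usable for illustration on any OS

def candidate_config_paths(app_name, install_dir, env):
    """Return (label, path) candidates in one plausible lookup order.
    Illustrative only: real applications each choose their own subset."""
    candidates = [("install dir", ntpath.join(install_dir, "config.ini"))]
    for label, var in [("roaming", "APPDATA"),        # follows the user between machines
                       ("local", "LOCALAPPDATA"),     # stays on this machine
                       ("system-wide", "PROGRAMDATA")]:  # shared by all users
        root = env.get(var)
        if root:
            candidates.append((label, ntpath.join(root, app_name, "config.ini")))
    return candidates

# Hypothetical environment for a user "alice" and an app "ExampleApp".
env = {"APPDATA": r"C:\Users\alice\AppData\Roaming",
       "LOCALAPPDATA": r"C:\Users\alice\AppData\Local",
       "PROGRAMDATA": r"C:\ProgramData"}
paths = candidate_config_paths("ExampleApp", r"C:\Program Files\ExampleApp", env)
```

Even this simplified sketch yields four distinct locations before the registry is considered at all, which is the nightmare the comment is pointing at.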

Judge Says EA Battlefield 4 Execs Engaged In "Puffery," Not Fraud

Posted by timothy in YRO • View
DemonOnIce writes with a story, as reported by Ars Technica, that a federal judge in San Francisco has dismissed a proposed securities fraud class action lawsuit connected to Battlefield 4's bungled rollout. From the report: EA and several top executives were sued in December and were accused of duping investors with their public statements and concealing issues with the first-person shooter game. The suit claimed executives were painting too rosy of a picture surrounding what ultimately would be Battlefield 4's disastrous debut on various gaming consoles beginning last October, including the next-generation Xbox One. But US District Judge Susan Illston of San Francisco said their comments about EA and the first-person shooter game were essentially protected corporate speak. "The Court agrees with defendants that all of the purported misstatements are inactionable statements of opinion, corporate optimism, or puffery," Illston ruled Monday.


By Dishevel • Score: 4, Funny • Thread
Timmothy has been around far too long here to be expected to do his job.


By idontgno • Score: 4, Funny • Thread

It's official. Doublespeak is now codified in law.

It's only fair. Law is encoded in Doublespeak.

I didn't lie, I just gave false statement

By gurps_npc • Score: 3 • Thread
Wow, the ability to come up with "he did it, but it wasn't bad enough to warrant legal action" excuses has had a huge renaissance.

Ah Investors, I Feel Your Pain

By Greyfox • Score: 3 • Thread
For I too have, far too often, put some money into something EA said would be awesome and it turned out to be a pile of crap. With time, you'll learn to be suspicious of anything EA says. Next year when they do "Restockening, the Sequel", I assure you that it will suck every bit as much as the original did. If you wait a few months before buying, you might be able to pick their stock up cheap (or possibly even free) during a Steam sale. That's just how you need to play the game, if you don't want to waste your money.

Yet it's still unplayable....

By Lumpy • Score: 3 • Thread

The game is still a steaming pile of crap, I still warn people away from it and anything else that comes from that franchise ever again.

Microsoft, Oracle Latest To Be Sued Over No-Poach Deal

Posted by timothy in YRO • View
itwbennett (1594911) writes Oracle and Microsoft are facing suits alleging that they conspired to restrict hiring of staff. The suits appear to refer to a memo that names a large number of companies that allegedly had special arrangements with Google to prevent poaching of staff, and that was filed as an exhibit on May 17, 2013 in another class action suit over hiring practices. The former employees filing lawsuits against Microsoft and Oracle have asked that the cases be assigned to Judge Koh as there were similarities with the case against Google, Apple and others — and it maybe doesn't hurt that Judge Koh thought the $324.5 million settlement in that case was too low.

Management only

By Curunir_wolf • Score: 4, Insightful • Thread
The memo only talks about executives and product managers. Engineers (at ANY level) are explicitly excluded from the agreement (that is, they can be recruited at will), as well as any product "contributors".

Class warfare

By meta-monkey • Score: 5, Insightful • Thread

This is why I'm opposed to all those "learn to code" programs Zuck and friends keep hyping. The people at the top of the tech industry are not content with their billions. They want your thousands, too. There is a concerted effort under way to push your wages down, take that money and throw it onto their own already huge piles. No poaching deals. H1B visas. "STEM shortage," "coder shortage" bullshit. It's all part of the same offensive. It is class warfare and their class is winning.

The settlement was too low.

By rahvin112 • Score: 4, Interesting • Thread

The settlement offer the lawyers wanted to take was WAY too low. After the agreement collapsed, Google alone had to give their entire staff a $10k/year raise, and they think less than $5k per person for multiple years is sufficient? Everyone should be getting $10k per year minimum. Lawyer fees should be capped and be above and beyond the payment to the class members. Only if these companies have to give every employee affected by this $50K or $100K in damages will this set a precedent that will prevent future abuses.

Re:wait a second...

By frank_adrian314159 • Score: 5, Funny • Thread

Their "toolbar" hides in Oracle's installer for Java. The parasite... nay, symbiote, uses this installer as a vector to infect unsuspecting computers, the end result being the madness of innocent system administrators and dragooned relatives helping Grandma figure out why her system is so slow because she hasn't sprung for new hardware since the mid-Nineteen-Fucking-Nineties and it's a GODDAMN Windows Machine And... MOTHER OF GOD! I don't believe this! It's XP and it Has Every Piece of Malware Since the DAWN OF TIME INSTALLED ON IT AND I HAVE TO CLEAN IT ALL OFF BECAUSE SHE COULDN'T LOSE THE MOTHERFUCKING CAT VIDEO HER &^!!%(*!&$!&^*$#! FRIEND CHARLENE SENT HER AND THE SENILE OLD BIDDY CAN'T REMEMBER... uh, where she put it... ahem, um sorry, where was I? Oh, yeah...

I've seen it far too many times for it to be a phantom. A zombie, perhaps, shambling along on toolbar installations by those too green or momentarily distracted or forgetful... So, even if it is dead, it lives! IT LIVES!

And everyone in one of these professions was hurt

By bigpat • Score: 4, Informative • Thread
The class should be expanded to cover everyone in the profession not just employees of the companies. Many more people were damaged by this illegal conspiracy because these companies were in large part influencing the setting of wages for the industry. By illegally restraining trade they illegally depressed salaries for the entire market.

6,000 Year Old Temple Unearthed In Ukraine

Posted by timothy in Science • View
An anonymous reader writes A massive archaeological dig of an ancient Ukrainian village first begun in 2009 has yielded a discovery that I sort of hope ends up inspiring a video game: a massive, scary-sounding temple. From the article: "Inside the temple, archaeologists found the remains of eight clay platforms, which may have been used as altars, the finds suggested. A platform on the upper floor contains 'numerous burnt bones of lamb, associated with sacrifice,' write Burdo and Videiko, of the Institute of Archaeology of the National Academy of Sciences of Ukraine. The floors and walls of all five rooms on the upper floor were 'decorated by red paint, which created [a] ceremonial atmosphere.'"
Maybe this is what Putin has been after.

Re:6,000 Year Old Temple Unearthed In Ukraine

By magarity • Score: 4, Informative • Thread

You seem completely confused about military behavior by national demographics. You're thinking of either Muslims or Southeast Asian communists. Russians, on the other hand, have a distinct reverence for history. For example, when the Bolsheviks took St. Petersburg they rather famously protected the Winter Palace and the Hermitage from any kind of vandalism.

Re:6,000 Year Old Temple Unearthed In Ukraine

By cusco • Score: 4, Interesting • Thread

No, he's thinking of the US military's insistence on bulldozing an airfield well into the protected archeological zone of Babylon, destroying (IIRC) an unexcavated mound and a minor temple in the process. (Then, to add insult to injury, they abandoned the project because it wasn't needed, something they were told well before starting work.)

Re:6,000 Year Old Temple Unearthed In Ukraine

By cptdondo • Score: 5, Insightful • Thread

And you must not be thinking of the Russians famously defecating in the hallways of the Czechoslovak National Museum after ransacking it and destroying what they could not steal in 1968.

Tell me about reverence by the Russians for anything other than vodka.

Re:Why scary?

By PPH • Score: 5, Funny • Thread

underground, recently excavated and features sacrificial altars

In the distant future, archeologists will unearth the food court on the lower level of our local mall and discover the altar upon which thousands of chickens were sacrificed to the god Colonel Sanders.

Re:6,000 Year Old Temple Unearthed In Ukraine

By ShanghaiBill • Score: 4, Insightful • Thread

The difference, of course, is that Czechoslovakia was never a part of Russia in any way, shape or form

Except for the thousands of square miles of Czech territory annexed by the Soviet Union in 1945.

FTDI Reportedly Bricking Devices Using Competitors' Chips

Posted by Soulskill in Hardware • View
janoc writes It seems that chipmaker FTDI has started an outright war on cloners of their popular USB bridge chips. At first the clones stopped working with the official drivers, and now they are being intentionally bricked, rendering the device useless. The problem? These chips are incredibly popular and used in many consumer products. Are you sure yours doesn't contain a counterfeit one before you plug it in? Hackaday says, "It’s very hard to tell the difference between the real and fake versions by looking at the package, but a look at the silicon reveals vast differences. The new driver for the FT232 exploits these differences, reprogramming it so it won’t work with existing drivers. It’s a bold strategy to cut down on silicon counterfeiters on the part of FTDI. A reasonable company would go after the manufacturers of fake chips, not the consumers who are most likely unaware they have a fake chip."

Re:On the other hand...

By suutar • Score: 4, Insightful • Thread

Fake chips are a problem. Bricking equipment that includes fake chips is also a problem.

Re:This might have been incompetence, not malice

By Slashdot Parent • Score: 5, Insightful • Thread

Except the chip wasn't, as you put it, "killed." The chip is still fully functional with a driver that will support it.

The chip was pretty much killed. With a PID of 0, Windows, Mac OS, and Linux won't recognize it. It's theoretically possible to fix the PID, but most end users wouldn't know how to do that.

Why should FTDI support chips it didn't make?

They shouldn't have to support chips that they didn't make, but at the same time, they shouldn't brick* chips that they didn't manufacture.

What FTDI really should have done is to set a generic PID for the chip type. That way, the chip would no longer use the FTDI driver, and they wouldn't have to support it.

*I use "brick" in the sense that using their Windows driver to set the PID to 0 makes the chip no longer function in other OSs, either. I am aware that an unbricking procedure is available.

Re:On the other hand...

By lgw • Score: 4, Funny • Thread

Sony was slapped with a fine so large the shareholders winced. The CEO resigned. The DoJ said they got the benefit of the doubt that the effect on government computers was unintended, but if Sony didn't learn the DoJ would simply ... end ... Sony America.


By citizenr • Score: 3 • Thread

And here we have the very first BadUSB attack: computer malware infecting and destroying USB-connected peripherals, possible because the USB device had no firmware signing/authentication and was built to let anyone update it.

Re:The good news

By ChumpusRex2003 • Score: 5, Informative • Thread
Yes. A company called Supereal is selling enormous volumes of "FTDI" chips into the Chinese market. The chips are labelled with the FTDI name and logo and during the USB negotiation, they announce themselves using the FTDI vendor unique ID, in order to use the ubiquitous and flexible FTDI driver (rather than require any development work for their own driver).

See for an example of a fake chip - labelled FTDI on the outside, but supereal on the silicon.

The problem is that the fake chips are buggy and slow compared to the genuine article, causing headaches for USB peripheral designers and support and reputation headaches for FTDI. There is a huge market for USB UART chips, and it is quite competitive, but few of the products on the market are actually as reliable, fast and robust as you would expect them to be. The FTDI FT232RL is one of the best in terms of reliability and has the best drivers, while also providing some handy bonus functionality.

It appears that FTDI have reverse engineered the fake chips and found that they can be reprogrammed. When their driver detects a fake chip, it uses the internal configuration commands to erase the EEPROM memory containing the Vendor Unique ID. With this EEPROM blanked, the chip is unable to complete the device detection process in the OS's USB stack.
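The end state described above, a chip that still answers the FTDI vendor ID but enumerates with PID 0 so no stock OS driver binds to it, can be illustrated with a small classifier. The well-known FTDI vendor ID is 0x0403 and the FT232R product ID is 0x6001; everything else here is a toy sketch, since real counterfeit detection inspects chip behavior, not just descriptors.

```python
FTDI_VID = 0x0403    # FTDI's USB vendor ID
FT232R_PID = 0x6001  # product ID the FT232R (and its clones) announce

def classify_ftdi_device(vid, pid):
    """Classify a USB device by its descriptor IDs (toy sketch only;
    matching IDs does not prove the silicon is genuine)."""
    if vid != FTDI_VID:
        return "not an FTDI-branded device"
    if pid == 0x0000:
        # PID erased to zero: the OS USB stack finds no driver to bind,
        # which is the "bricked" state described in the article.
        return "soft-bricked (PID zeroed)"
    if pid == FT232R_PID:
        return "claims to be an FT232R"
    return "other FTDI product"

healthy = classify_ftdi_device(0x0403, 0x6001)
bricked = classify_ftdi_device(0x0403, 0x0000)
```

Note the limitation the thread keeps circling: from descriptors alone, a genuine FT232R and a clone are indistinguishable, which is exactly why FTDI's driver had to probe undocumented behavioral differences in the silicon instead.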