Government

Researchers Ask Federal Court To Unseal Years of Surveillance Records (arstechnica.com) 21

An anonymous reader quotes a report from Ars Technica: Two lawyers and legal researchers based at Stanford University have formally asked a federal court in San Francisco to unseal numerous records of surveillance-related cases, as a way to better understand how authorities seek such powers from judges. This courthouse is responsible for the entire Northern District of California, which includes the region where tech companies such as Twitter, Apple, and Google are based. According to the petition, Jennifer Granick and Riana Pfefferkorn were partly inspired by a number of high-profile privacy cases that have unfolded in recent years, ranging from Lavabit to Apple's battle with the Department of Justice. In their 45-page petition, they specifically say that they don't need all sealed surveillance records, simply those that should have been unsealed -- which, unfortunately, doesn't always happen automatically. The researchers wrote in their Wednesday filing: "Most surveillance orders are sealed, however. Therefore, the public does not have a strong understanding of what technical assistance courts may order private entities to provide to law enforcement. There are at least 70 cases, many under seal, in which courts have mandated that Apple and Google unlock mobile phones and potentially many more. The Lavabit district court may not be the only court to have ordered companies to turn over private encryption keys to law enforcement based on novel interpretations of law. Courts today may be granting orders forcing private companies to turn on microphones or cameras in cars, laptops, mobile phones, smart TVs, or other audio- and video-enabled Internet-connected devices in order to conduct wiretapping or visual surveillance. This pervasive sealing cripples public discussion of whether these judicial orders are lawful and appropriate."
Yahoo!

Yahoo Open Sources a Deep Learning Model For Classifying Pornographic Images (venturebeat.com) 107

New submitter OWCareers writes: Yahoo today announced its latest open-source release: a model that can figure out if images are specifically pornographic in nature. The system uses a type of artificial intelligence called deep learning, which involves training artificial neural networks on lots of data (like dirty images) and getting them to make inferences about new data. The model that's now available on GitHub under a BSD 2-Clause license comes pre-trained, so users only have to fine-tune it if they so choose. The model works with the widely used Caffe open source deep learning framework. The team trained the model using its now open source CaffeOnSpark system.
The new model could be interesting to look at for developers maintaining applications like Instagram and Pinterest that are keen to minimize smut. Search engine operators like Google and Microsoft might also want to check out what's under the hood here.
The tool gives each image a score between 0 and 1 indicating how NSFW it looks. The official blog post from Yahoo outlines several examples.
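Since the model only emits a probability, applications still have to decide what to do with it. A minimal sketch of that thresholding step (the 0.2 and 0.8 cutoffs are illustrative assumptions, not values from Yahoo; the blog post says thresholds should be tuned per use case):

```python
def classify_nsfw(score: float) -> str:
    """Bucket an NSFW probability in [0, 1] into a moderation decision.

    The cutoffs (0.2 and 0.8) are illustrative only; a real deployment
    would tune them against its own tolerance for false positives.
    """
    if not 0.0 <= score <= 1.0:
        raise ValueError("score must be between 0 and 1")
    if score < 0.2:
        return "likely safe"
    if score > 0.8:
        return "likely nsfw"
    return "needs human review"
```

A site like Pinterest might auto-approve "likely safe", auto-hide "likely nsfw", and queue the middle band for human moderators.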
Chrome

Chromification Continues: Firefox May Use Chrome's PDF and Flash Plugins (softpedia.com) 102

An anonymous reader writes: Mozilla today announced Project Mortar, an initiative to explore the possibility of deploying alternative technologies in Firefox to replace its internal implementations. The project's first two goals are to test two Chrome plugins within the Firefox codebase. These are PDFium, the Chrome plugin for viewing PDF files, and Pepper Flash, Google's custom implementation of Adobe Flash. The decision comes as Mozilla is trying to cut down development costs, after Firefox took a nose dive in market share this year. "In order to enable stronger focus on advancing the Web and to reduce the complexity and long term maintenance cost of Firefox, and as part of our strategy to remove generic plugin support, we are launching Project Mortar," said Johnny Stenback, Senior Director Of Engineering at Mozilla Corporation. "Project Mortar seeks to reduce the time Mozilla spends on technologies that are required to provide a complete web browsing experience, but are not a core piece of the Web platform," Stenback adds. "We will be looking for opportunities to replace such technologies with other existing alternatives, including implementations by other browser vendors."
Android

Google Rebrands 'Apps for Work' To 'G Suite,' Adds New Features (thenextweb.com) 62

Google has renamed "Apps for Work" to "G Suite" to "help people everywhere work and innovate together, so businesses can move faster and go bigger." They have also added a bunch of new features, such as a "Quick Access" section for Google Drive for Android that uses machine learning to predict what files you're going to need when you open up the app, based on your previous behavior. Calendar will automatically pick times to set up meetings through the use of machine intelligence. Sheets is also using AI "to turn your layman English requests into formulas through its 'Explore' feature," reports The Next Web. "In Slides, Explore uses machine learning to dynamically suggest and apply design ideas, while in Docs, it will suggest backup research and images you can use in your musings, as well as help you insert files from your Drive account. Throughout Docs, Sheets, and Slides, you can now recover deleted files on Android from a new 'Trash' option in the side/hamburger menu." Google's cloud services will now fall under a new "Google Cloud" brand, which includes G Suite, Google Cloud Platform, new machine learning tools and APIs, and Google's various devices that access the cloud. Slashdot reader wjcofkc adds: I just received the following email from Google. When I saw the title, my first thought was that there was malware lying at the end -- further inspection proved it to be real. Is this the dumbest name change in the history of name changes? Google of all companies does not have to try so hard. "Hello Google Apps Customer, We created Google Apps to help people everywhere work and innovate together, so that your organization can move faster and achieve more. Today, we're introducing a new name that better reflects this mission: G Suite. Over the coming weeks, you'll see our new name and logo appear in familiar places, including the Admin console, Help Center, and on your invoice.
G Suite is still the same all-in-one solution that you use every day, with the same powerful tools -- Gmail, Docs, Drive, and Calendar. Thanks for being part of the journey that led us to G Suite. We're always improving our technology so it learns and grows with your team. Visit our official blog post to learn more."
News

Slashdot Asks: The Washington Post Says It Publishes Something Every Minute -- How Much Is Too Much? (washingtonian.com) 87

Media outlets are increasingly vying for your attention. But they are also feeding Google's algorithm. Some of them churn out hundreds of news articles every day, hoping to offer a diverse range of articles to their readers and to increase their "search space." The Washington Post is currently running a promotional offer -- letting people get a six-month digital subscription for $10 (pretty good if you ask me). But the Washington Post also notes that it now publishes a new piece of content every minute. That's roughly 1,440 articles, videos, and other pieces of content in a single day. This raises a question: how much content is too much content? How many stories can a person possibly find time to read in a day? Do you feel that perhaps outlets should cut down on the number of things they publish? Or are you happy with the way things are?
AI

Microsoft Forms New AI Research Group Led By Harry Shum (techcrunch.com) 43

An anonymous reader quotes a report from TechCrunch: A day after announcing a new artificial intelligence partnership with IBM, Google, Facebook and Amazon, Microsoft is upping the ante within its own walls. The tech giant announced that it is creating a new AI business unit, the Microsoft AI and Research Group, which will be led by Microsoft Research EVP Harry Shum. Shum will oversee 5,000 computer scientists, engineers and others who will all be "focused on the company's AI product efforts," the company said in an announcement. The unit will be working on all aspects of AI and how it will be applied at the company, covering agents, apps, services and infrastructure. Shum has been involved in some of Microsoft's biggest product efforts at the ground level of research, including the development of its Bing search engine, as well as in its efforts in computer vision and graphics: that is a mark of where Microsoft is placing its own priority for AI in the years to come. It is important to note that the Microsoft Research unit will no longer be its own discrete unit -- it will be combined with this new AI effort. Research had 1,000 people also working on areas like quantum computing, and that will now be rolled into the bigger research and development efforts being announced today. Products that will fall under the new unit will include Information Platform, Cortana and Bing, and Ambient Computing and Robotics teams led by David Ku, Derrick Connell and Vijay Mital, respectively. The Microsoft AI and Research Group will encompass AI product engineering, basic and applied research labs, and New Experiences and Technologies (NExT), Microsoft said.
Google

Google Delays Release of Android Wear 2.0 To 2017 (techcrunch.com) 13

Google announced today that the next generation of its smartwatch platform -- Android Wear 2.0 -- won't be seeing the light of day this year. The company says that it will release the final version of Android Wear 2.0 in early 2017. From a TechCrunch report: While Google never talked about a final release date for Wear 2.0, its original schedule called for about 30 weeks of alpha and beta testing, which would have put the release date somewhere around the middle of December. Google, however, now says that it has gotten "tons of great feedback from the developer community about Android Wear 2.0" and that it is "committed to improve and iterate based on them to ensure a great user experience." Because of this, the plan is to continue the preview program into early 2017, at which time the first watches will receive the new version. CNET reported recently that three of the top Android Wear smartwatch makers -- LG, Huawei, and Motorola -- have confirmed that they won't be releasing new smartwatches until next year, at least.
AI

Facebook, Amazon, Google, IBM, and Microsoft Come Together To Create Historic Partnership On AI (techcrunch.com) 87

An anonymous reader quotes a report from TechCrunch: In an act of self-governance, Facebook, Amazon, Alphabet, IBM, and Microsoft came together today to announce the launch of the new Partnership on AI. The group is tasked with conducting research and promoting best practices. Practically, this means that the group of tech companies will come together frequently to discuss advancements in artificial intelligence. The group also opens up a formal structure for communication across company lines. It's important to remember that on a day-to-day basis, these teams are in constant competition with each other to develop the best products and services powered by machine intelligence. Financial support will be coming from the initial tech companies who are members of the group, but in the future membership and involvement is expected to increase. User activists, non-profits, ethicists, and other stakeholders will be joining the discussion in the coming weeks. The organizational structure has been designed to allow non-corporate groups to have equal leadership side-by-side with large tech companies. As of today's launch, companies like Apple, Twitter, Intel and Baidu are missing from the group. Though Apple is said to be enthusiastic about the project, their absence is still notable because the company has fallen behind in artificial intelligence when compared to its rivals -- many of whom are part of this new group. The new organization really seems to be about promoting change by example. Rather than preach to the tech world, it wants to use a standard open license to publish research on topics including ethics, inclusivity, and privacy.
Businesses

D-Wave's 2,000-Qubit Quantum Annealing Computer Now 1,000x Faster Than Previous Generation (tomshardware.com) 116

An anonymous reader quotes a report from Tom's Hardware: D-Wave, a Canadian company developing the first commercial "quantum computer," announced its next-generation quantum annealing computer with 2,000 qubits, which is twice as many as its previous generation had. One highly exciting aspect of quantum computers of all types is that beyond the seemingly Moore's Law-like increase in number of qubits every two years, their performance increases much more than just 2x, unlike with regular microprocessors. This is because qubits can hold a value of 0, 1, or a superposition of the two, making quantum systems able to deal with much more complex information. If D-Wave's 2,000-qubit computer is now 1,000x faster than the previous 1,000-qubit generation (D-Wave 2X), that would mean that, for the things Google tested last year, it should now be 100 billion times faster than a single-core CPU. The new generation also comes with control features, which allow users to modify how D-Wave's quantum system works to better optimize their solutions. These control features include the following capabilities: The ability to tune the rate of annealing of individual qubits to enhance application performance; The ability to sample the state of the quantum computer during the quantum annealing process to power hybrid quantum-classical machine learning algorithms that were not previously possible; The ability to combine quantum processing with classical processing to improve the quality of both optimization and sampling results returned from the system. D-Wave's CEO, Vern Brownell, said that D-Wave's quantum computers could also be used for machine learning tasks in ways that wouldn't be possible on classical computers. The company is also training the first generation of programmers to develop applications for D-Wave quantum systems.
Last year, Google said that D-Wave's 1,000 qubit computer proved to be 100 million times faster than a classical computer with a single core: "We found that for problem instances involving nearly 1,000 binary variables, quantum annealing significantly outperforms its classical counterpart, simulated annealing. It is more than 10^8 times faster than simulated annealing running on a single core," said Hartmut Neven, Google's Director of Engineering.
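Simulated annealing, the classical counterpart Neven refers to, is itself easy to sketch: randomly perturb a candidate solution and accept worse moves with a probability that shrinks as a "temperature" cools. The toy objective and cooling schedule below are illustrative, not Google's actual benchmark problem:

```python
import math
import random

def simulated_annealing(energy, n_bits, steps=5000, t_start=2.0, t_end=0.01, seed=42):
    """Minimize `energy` over bit strings by flipping one random bit per step
    and accepting uphill moves with a temperature-dependent probability."""
    rng = random.Random(seed)
    state = [rng.randint(0, 1) for _ in range(n_bits)]
    e = energy(state)
    for step in range(steps):
        # Exponential cooling schedule from t_start down to t_end.
        t = t_start * (t_end / t_start) ** (step / steps)
        i = rng.randrange(n_bits)
        state[i] ^= 1                 # propose: flip one bit
        e_new = energy(state)
        if e_new <= e or rng.random() < math.exp((e - e_new) / t):
            e = e_new                 # accept the move
        else:
            state[i] ^= 1             # reject: flip the bit back
    return state, e

# Toy objective: number of set bits (the optimum is the all-zeros string).
state, e = simulated_annealing(lambda s: sum(s), n_bits=20)
```

Quantum annealing tackles the same kind of binary optimization, but escapes local minima by quantum tunneling rather than thermal jumps, which is where the claimed speedup comes from.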
Businesses

55 Percent Of Online Shoppers Start Their Product Searches On Amazon (recode.net) 141

Another year, another data point showing Amazon has surpassed Google as the default search engine for shopping, according to a report on Recode. Fifty-five percent of people in the U.S. now start their online shopping trips on Amazon.com, according to results from a 2,000-person survey commissioned by the e-commerce startup BloomReach. That stat marks a 25 percent increase from the same survey last year, when 44 percent of online shoppers said they turned to Amazon first. From the report: Over the same time, the percentage of shoppers who start product searches on search engines like Google dropped from 34 percent to 28 percent. The number of online shoppers who check out a retailer's website (other than Amazon) first also shrunk, from 21 percent to 16 percent.
AI

Google's New Translation Software Powered By Brainlike Artificial Intelligence (sciencemag.org) 87

sciencehabit quotes a report from Science Magazine: Today, Google rolled out a new translation system that uses massive amounts of data and increased processing power to build more accurate translations. The new system, a deep learning model known as neural machine translation, effectively trains itself -- and reduces translation errors by up to 87%. When compared with Google's previous system, the neural machine translation system scores well with human reviewers. It was 58% more accurate at translating English into Chinese, and 87% more accurate at translating English into Spanish. As a result, the company is planning to slowly replace the system underlying all of its translation work -- one language at a time. The report adds: "The new method, reported today on the preprint server arXiv, uses a total of 16 processors to first transform words into a value known as a vector. What is a vector? 'We don't know exactly,' [Quoc Le, a Google research scientist in Mountain View, California, says.] But it represents how related one word is to every other word in the vast dictionary of training materials (2.5 billion sentence pairs for English and French; 500 million for English and Chinese). For example, 'dog' is more closely related to 'cat' than 'car,' and the name 'Barack Obama' is more closely related to 'Hillary Clinton' than the name for the country 'Vietnam.' The system uses vectors from the input language to come up with a list of possible translations that are ranked based on their probability of occurrence. Other features include a system of cross-checks that further increases accuracy and a special set of computations that speeds up processing time."
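The "relatedness" Le describes is typically measured as cosine similarity between vectors. The three-dimensional vectors below are made up purely for illustration (real embeddings have hundreds of dimensions learned from billions of sentence pairs), but they show the geometry:

```python
import math

def cosine(u, v):
    """Cosine similarity: 1.0 for identical directions, 0.0 for orthogonal."""
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm

# Hypothetical 3-d embeddings; only their relative geometry matters here.
vectors = {
    "dog": [0.9, 0.8, 0.1],
    "cat": [0.85, 0.75, 0.2],
    "car": [0.1, 0.2, 0.9],
}

# "dog" points in nearly the same direction as "cat", but not as "car".
assert cosine(vectors["dog"], vectors["cat"]) > cosine(vectors["dog"], vectors["car"])
```

The translation system then scores candidate outputs by how probable they are given these vector representations of the input sentence.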
Mozilla

Mozilla Has Stopped All Commercial Development On Firefox OS -- Explains What It Plans To Do With Code Base (google.com) 97

Mozilla announced last year that its Firefox OS initiative of shipping phones with commercial partners did not bring the returns it sought. The company earlier this year hinted that it intends to shut down the project. It is now sharing how it will deal with the Firefox OS code base going forward. From their post: We would stop our efforts to build and ship smartphones through carrier partners and pivot our efforts with Firefox OS to explore opportunities for new use cases in the world of connected devices. Firefox OS was transitioned to a Tier 3 platform from the perspective of support by Mozilla's Platform Engineering organization. That meant as of January 31, 2016 no Mozilla Platform Engineering resources would be engaged to provide ongoing support and all such work would be done by other contributors. For some period of time that work would be done by Mozilla's Connected Devices team. We had ideas for other opportunities for Firefox OS, perhaps as a platform for explorations in the world of connected devices, and perhaps for continued evolution of Firefox OS TV. To allow for those possibilities, and to provide a stable release for commercial TV partners, development would continue on a Firefox OS 2.6 release. In parallel with continued explorations by the Connected Devices team, we recognized there was interest within the Mozilla community in carrying forward work on Firefox OS as a smartphone platform, and perhaps even for other purposes. A Firefox OS Transition Project was launched to perform a major clean-up of the B2G code bringing it to a stable end state so it could be passed into the hands of the community as an open source project. In the spring and summer of 2016 the Connected Devices team dug deeper into opportunities for Firefox OS. They concluded that Firefox OS TV was a project to be run by our commercial partner and not a project to be led by Mozilla.
Further, Firefox OS was determined to not be sufficiently useful for ongoing Connected Devices work to justify the effort to maintain it. This meant that development of the Firefox OS stack was no longer a part of Connected Devices, or Mozilla at all. Firefox OS 2.6 would be the last release from Mozilla. Today we are announcing the next phase in that evolution. While work at Mozilla on Firefox OS has ceased, we very much need to continue to evolve the underlying code that comprises Gecko, our web platform engine, as part of the ongoing development of Firefox. In order to evolve quickly and enable substantial new architectural changes in Gecko, Mozilla's Platform Engineering organization needs to remove all B2G-related code from mozilla-central. This certainly has consequences for B2G OS. For the community to continue working on B2G OS they will have to maintain a code base that includes a full version of Gecko, so will need to fork Gecko and proceed with development on their own, separate branch.
Mozilla

Mozilla's Proposed Conclusion: Game Over For WoSign and Startcom? (google.com) 111

Reader Zocalo writes: Over the last several months Mozilla has been investigating a large number of breaches of what Mozilla deems to be acceptable CA protocols by the Chinese root CA WoSign and their perhaps better known subsidiary StartCom, whose acquisition by WoSign is one of the issues in question. Mozilla has now published their proposed solution (GoogleDocs link), and it's not looking good for WoSign and StartCom. Mozilla's position is that they have lost trust in WoSign and, by association, StartCom, with a proposed action to give WoSign and StartCom a "timeout" by distrusting any certificates issued after a date to be determined in the near future for a period of one year, essentially preventing them issuing any certificates that will be trusted by Mozilla. Attempts to circumvent this by back-dating the valid-from date will result in an immediate and permanent revocation of trust, and there are some major actions required to re-establish that trust at the end of the time out as well.
This seems like a rather elegant, if somewhat draconian, solution to the issue of what to do when a CA steps out of line. Revoking trust for certificates issued after a given date does not invalidate existing certificates and thereby inconvenience their owners, but it does put a severe -- and potentially business-ending -- penalty on the CA in question. Basically, WoSign and StartCom will have a year where they cannot issue any new certificates that Mozilla will trust, and will also have to inform any existing customers with certificate renewals due within that period that they cannot do so and will need to go elsewhere -- hardly good PR!
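Mechanically, the proposed sanction amounts to a date-window check at certificate validation time. A minimal sketch of that logic (the dates and function name are hypothetical; Mozilla's actual enforcement lives in NSS/Firefox and checks the certificate's notBefore field among much else):

```python
from datetime import date

# Hypothetical cutoff dates; Mozilla's proposal leaves the start date TBD.
DISTRUST_START = date(2016, 10, 21)
DISTRUST_END = date(2017, 10, 21)    # one-year "timeout"

def is_trusted(not_before: date) -> bool:
    """Distrust any certificate whose validity period begins inside the
    one-year timeout window; certificates issued before it stay trusted."""
    return not (DISTRUST_START <= not_before < DISTRUST_END)
```

Note that this check alone cannot catch back-dated certificates, since it only sees the notBefore date the CA itself wrote -- which is exactly why Mozilla pairs the window with a threat of permanent distrust if back-dating is detected.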

What does Slashdot think? Is Mozilla going too far here, or is their proposal justified and reasonable given WoSign's actions, making a good template for potential future breaches of trust by root CAs, particularly in the wake of other CA trust breaches by the likes of CNNIC, DigiNotar, and Symantec?

Android

Google Is Planning a 'Pixel 3' Laptop Running 'Andromeda' OS For Release in Q3 2017 (androidpolice.com) 56

Google plans to launch a laptop next year with Pixel branding which will run the 'Andromeda' operating system, reports AndroidPolice, citing sources. Andromeda is a hybrid of Android and Chrome OS, the report adds. Pixel, Chrome OS and Android teams have been working on this project, dubbed Bison, for years, apparently. From the report: Bison is planned as an ultra-thin laptop with a 12.3" display, but Google also wants it to support a "tablet" mode. It's unclear to us if this means Bison will be a Lenovo Yoga-style convertible device, or a detachable like Microsoft's Surface Book, but I'm personally leaning toward the former given how thin it is. Powering it will be either an Intel m3 or i5 Core processor with 32 or 128GB of storage and 8 or 16GB of RAM. This seems to suggest there will be two models. It will also feature a fingerprint scanner, two USB-C ports, a 3.5mm jack (!), a host of sensors, stylus support (a Wacom pen will be sold separately), stereo speakers, quad microphones, and a battery that will last around 10 hours. The keyboard will be backlit, and the glass trackpad will use haptic and force detection similar to the MacBook. Google plans to fit all of this in a form factor under 10mm in thickness, notably thinner than the aforementioned Apple ultraportable. The report, however, adds that it is likely that Google might revise the specifications by the time of its launch, which is slated to happen sometime in Q3 2017.
Programming

Which Programming Language Is Most Popular - The Final Answer? (zdnet.com) 398

An anonymous Slashdot reader writes: Following a common technique among political pollsters, a technology columnist combined the results from various measures of programming language popularity for a more definitive answer about the most important languages to study. He used IEEE Spectrum's interactive list of the top programming languages, which lets you adjust the weight given to the number of job listings and number of open source projects, then combined it with the TIOBE Index (which is based on search engine results), and the PYPL Index, which checks the number of tutorials for each programming language on Google.

The results? "The top cluster contains Java, C, Python, and C++. Without a doubt, you should attain familiarity with these four languages." He points out they're not tied to a specific programming platform, unlike languages in the second cluster -- JavaScript, C#, PHP, and Swift -- while the last two languages in the top 10 were Objective-C and R. "The C-family of languages still dominates. Java, C++, C, C#, and even Objective-C are all C-based languages. If you're only going to learn one language, you should pick one of those." But his ultimate advice is to "learn multiple languages and multiple frameworks... Programming is not just an intellectual exercise. You have to actually make stuff."
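The pollster-style aggregation described above boils down to rescaling each index to a common range and averaging. A minimal sketch of that approach (the scores and index names below are invented for illustration, not the columnist's actual data):

```python
def combine_rankings(indices, weights=None):
    """Combine several popularity indices into one score per language.

    `indices` maps an index name to {language: raw score}. Each index is
    rescaled to [0, 1] by its own maximum so that differently-scaled
    sources (ratings, search hits, tutorial counts) can be averaged.
    """
    names = list(indices)
    weights = weights or {n: 1.0 for n in names}
    total_w = sum(weights[n] for n in names)
    combined = {}
    for name in names:
        peak = max(indices[name].values())
        for lang, score in indices[name].items():
            combined[lang] = combined.get(lang, 0.0) + weights[name] * score / peak
    return {lang: s / total_w for lang, s in combined.items()}

# Invented example scores for three hypothetical indices.
indices = {
    "spectrum": {"Java": 100, "C": 98, "Python": 96},
    "tiobe": {"Java": 19.0, "C": 12.0, "Python": 4.0},
    "pypl": {"Java": 24.0, "C": 7.0, "Python": 12.0},
}
scores = combine_rankings(indices)
```

One caveat of this scheme: a language missing from one index simply contributes nothing there, which quietly penalizes it; a real aggregation would have to decide how to handle gaps.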

Power

Amazon Pursues More Renewable Energy, Following Google, Apple, And Facebook (fortune.com) 85

An anonymous Slashdot reader writes: Amazon will open a 100-turbine, 253-megawatt wind farm in Texas by the end of next year -- generating enough energy to power almost 90,000 U.S. homes. Amazon already has wind farms in Indiana, North Carolina, and Ohio (plus a solar farm in Virginia), and 40% of the power for AWS already comes from renewable sources, but Amazon's long-term plan is to raise that to 100%.
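The "almost 90,000 homes" figure can be sanity-checked with rough arithmetic. The capacity factor and average household draw below are assumptions for illustration, not numbers from the article:

```python
nameplate_mw = 253          # from the announcement
capacity_factor = 0.41      # assumed: plausible for a good Texas wind site
avg_home_kw = 1.2           # assumed: ~10,500 kWh/year per average U.S. home

# Wind farms rarely run at nameplate capacity; average output is what
# determines how many homes the farm can supply over a year.
avg_output_kw = nameplate_mw * 1000 * capacity_factor
homes_powered = avg_output_kw / avg_home_kw
```

With these assumptions the result lands in the mid-80,000s, consistent with the "almost 90,000" claim.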

But several of the world's largest tech companies are already pursuing their own aggressive renewable energy programs, according to Fortune. Google "has said it's the largest non-utility purchaser of renewable energy in the world. Apple claims that in 2015, 93% of its energy came from renewable sources, and its data centers are already 100% run on renewables (though that claim does rely on carbon trading). Facebook, which also uses Texas wind facilities, is aiming for 50% of its data center power to come from renewables by 2018. Even slightly smaller companies like Salesforce have made big commitments to renewable energy."

Last year for the first time utilities actually bought less than half the power produced by wind farms -- because tech companies, universities, and cities had already locked it down with long-term contracts.
The Internet

What Vint Cerf Would Do Differently (computerworld.com) 125

An anonymous Slashdot reader quotes ComputerWorld: Vint Cerf is considered a father of the internet, but that doesn't mean there aren't things he would do differently if given a fresh chance to create it all over again. "If I could have justified it, putting in a 128-bit address space would have been nice so we wouldn't have to go through this painful, 20-year process of going from IPv4 to IPv6," Cerf told an audience of journalists Thursday... For security, public key cryptography is another thing Cerf would like to have added, had it been feasible.

Trouble is, neither idea is likely to have made it into the final result at the time. "I doubt I could have gotten away with either one," said Cerf, who won a Turing Award in 2004 and is now vice president and chief internet evangelist at Google. "So today we have to retrofit... If I could go back and put in public key crypto, I probably would try."
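The scale of the address-space difference Cerf laments is easy to compute: a 32-bit space tops out around 4.3 billion addresses, while the 128-bit space IPv6 eventually adopted is larger by a factor of 2^96:

```python
ipv4 = 2 ** 32    # IPv4: 32-bit addresses, ~4.3 billion total
ipv6 = 2 ** 128   # IPv6: the 128-bit space Cerf wishes he had started with

assert ipv4 == 4_294_967_296
assert ipv6 // ipv4 == 2 ** 96   # each IPv4 address maps to 2^96 IPv6 ones
```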

Vint Cerf answered questions from Slashdot users back in 2011.
Yahoo!

Moving Beyond Flash: the Yahoo HTML5 Video Player (streamingmedia.com) 96

Slashdot reader theweatherelectric writes: Over on Streaming Media, Amit Jain from Yahoo has written a behind-the-scenes look at the development of Yahoo's HTML5 video player. He writes, "Adobe Flash, once the de-facto standard for media playback on the web, has lost favor in the industry due to increasing concerns over security and performance. At the same time, requiring a plugin for video playback in browsers is losing favor among users as well. As a result, the industry is moving toward HTML5 for video playback...

At Yahoo, our video player uses HTML5 across all modern browsers for video playback. In this post we will describe our journey to providing an industry-leading playback experience using HTML5, lay out some of the challenges we faced, and discuss opportunities we see going forward."

Yet another brick in the wall? YouTube and Twitch have already switched to HTML5, and last year Google started automatically converting Flash ads to HTML5.
Google

Google Open Sources Its Image-Captioning AI (zdnet.com) 40

An anonymous Slashdot reader quotes ZDNet: Google has open-sourced a model for its machine-learning system, called Show and Tell, which can view an image and generate accurate and original captions... The image-captioning system is available for use with TensorFlow, Google's open machine-learning framework, and boasts a 93.9 percent accuracy rate on the ImageNet classification task, inching up from previous iterations.

The code includes an improved vision model, allowing the image-captioning system to recognize different objects in images and hence generate better descriptions. An improved image model meanwhile aids the captioning system's powers of description, so that it not only identifies a dog, grass and frisbee in an image, but describes the color of grass and more contextual detail.

Open Source

Ask Slashdot: Who's Building The Open Source Version of Siri? (upon2020.com) 186

We're moving to a world of voice interactions processed by AI. Long-time Slashdot reader jernst asks, "Will we ever be able to do that without going through somebody's proprietary silo like Amazon's or Apple's?" A decade ago, we in the free and open-source community could build our own versions of pretty much any proprietary software system out there, and we did... But is this still true...? Where are the free and/or open-source versions of Siri, Alexa and so forth?

The trouble, of course, is not so much the code, but in the training. The best speech recognition code isn't going to be competitive unless it has been trained with about as many millions of hours of example speech as the closed engines from Apple, Google and so forth have been. How can we do that? The same problem exists with AI. There's plenty of open-source AI code, but how good is it unless it gets training and retraining with gigantic data sets?

And even with that data, Siri gets trained with a massive farm of GPUs running 24/7 -- but how can the open source community replicate that? "Who has a plan, and where can I sign up to it?" asks jernst. So leave your best answers in the comments. Who's building the open source version of Siri?
