-JOSH CONSTINE, Sunday, via TechCrunch
When Mark Zuckerberg called for a “Space Hackathon” to decorate Facebook’s massive new headquarters at 1 Hacker Way, he probably didn’t expect employees to take him so literally. A few scurried up to the roof with some tar paint, and now there’s a 42-foot-wide QR code on the roof that’s visible from space.
Scanning it opens the new FB QR Code Page on Facebook which may host puzzles, jokes, and other flavor to humanize the company. For now you’ll need an airplane or Facebook security badge to get a look at it first-hand, but once indexed it should appear on your favorite satellite mapping website.
When the Hackathon was announced, most employees imagined beautifying the campus with posters and spray-paint murals, but Mark Pike had something bigger in mind. “It started with a comment on Zuck’s post. I wrote, ‘Hack yeah! I’d like to paint a gigantic QR code somewhere so we can RickRoll online maps, or point people to our careers site, or send them to a “Clarissa Explains it All” GeoCities Page,’” Pike says.
At 8 p.m. the night of the Hackathon, Pike and 5 others started on the project. By midnight they had a crew of 30, but it would take until daylight to see whether they’d screwed up the layout. An employee hacked a Canon camera’s firmware, strapped it to a home-made remote-controlled “frankencopter”, and flew it over the roof.
When it landed with the photos, the team finally found out that the code really worked. Facebook let us on the roof to check it out, and it’s pretty epic. You can try scanning the big picture above with Scan for iOS, QR Droid for Android, or most other QR apps.
What started in a dorm room has matured into one of the world’s most powerful companies. Now there are database architectures and business models to worry about. But with a silly 42-foot-wide QR code on the roof, Facebook proves it’s still young at heart.
-JON ORLIN, via TechCrunch
In 2010, TechCrunch broke the news that Facebook was going to release a “Like” button for the whole darn Internet. Now, TechCrunch has learned Facebook is considering a “Hate” button as well.
According to Facebook’s S-1 filing, users are now generating 2.7 billion Likes and Comments per day. With the Hate button, Facebook expects to at least double that. The S-1 noted “popular Pages on Facebook include Lady Gaga, Disney, and Manchester United, each of which has more than 20 million Likes.” Many inside the company think the Hates could easily top that.
When the original Like button was announced, Mark Zuckerberg made a bold prediction there would be over 1 billion Likes across the web in just the first 24 hours. Sources at Facebook say Mark is estimating 2 billion Hates on the first day. Facebook studies have shown the sad fact that people hate things on the Internet more than they like things. There’s also an internal debate on whether the new button should be called “Hate” or “Dislike.”
Since the tiny Like button makes up such a huge part of Facebook’s revenue, the introduction of the Hate button could raise Facebook’s valuation further ahead of the IPO.
Facebook has already shown they are open to changing the Like button. Earlier this month, Facebook Mobile replaced the 2-Click Like button with a 1-Click Like bar.
The company has also experimented with the “Fax” button, as TechCrunch was also the first to notice.
Other buttons under consideration are the “Meh”, “Love”, “Who Cares”, and “+11”, but there is also a fear this could lead to a button explosion.
Our sources say the Hate button is not a sure thing. It’s being heavily debated inside the social networking company. This new feature would fit with Facebook’s mission to “build tools to help people connect with the people they want and share what they want” whether that’s love or hate.
While the product and sales teams favor the idea, many inside Facebook oppose it. That view is best summed up by Robert Scoble, who wrote, “I really hope we never see a hate button that gets wide adoption. The world has enough hate as it is.”
Since Facebook is in its quiet period ahead of the IPO, the company had no official comment on this report.
Piriform is a private limited company founded in London, UK, in 2004. The company is located at 78 York Street, London, United Kingdom. CCleaner is one of the main products currently developed by Piriform. CCleaner cleans unwanted leftover files from programs such as Internet Explorer, Google Chrome, Opera, Safari, Windows Media Player, eMule, Google Toolbar, Netscape, Microsoft Office, Nero, Adobe Acrobat, Adobe Flash Player, Sun Java, WinRAR and other applications. CCleaner also includes a Windows Registry cleaner and an uninstall tool. CCleaner can be installed on both Windows and Mac OS X. The program is free and available in 47 languages.
-Caleb Garling, via Wired.com/Cloudline
A purported leaked screenshot of a Google Drive download page. Image courtesy of TalkAndroid
This morning, Ars Technica’s Jon Brodkin reported on newly leaked images of Google Drive, a rumored cloud storage offering from the web storage king. Gmail already offers more than 7 gigabytes (GB) of email storage, so the question was, what will Google Drive offer? From these leaked images — the validity of which is unverified — the answer is 5GB, even though a screenshot leaked last week showed 2GB.
Why the screenshots would show different starting sizes is unclear. Assuming the images are real, they could have been taken from different stages of development, or may not be representative of the final version that Google will offer.
In the last few months rumors have swirled and many have speculated about when exactly Google is going to jump into the consumer cloud storage market. Companies like DropBox and Box both offer gigabytes of free storage so that anyone can access their PDFs, pictures and documents from any web browser. Despite not yet having a dedicated service, Google is still the web’s storage giant, so such a product is inevitable.
Have your say: Will you take Google Drive for a ride if 5 GB is on offer? Should the folks at DropBox be worried?
-Mike Barton, via Wired.com/Cloudline
While coverage ahead of Oracle’s fiscal third quarter results yesterday focused on it losing ground to younger cloud rivals, my question of “But for how long?” did not take long to be answered, sort of.
“After a long period of testing … Oracle’s cloud applications will be generally available. We’ve named our cloud the Oracle Secure Cloud,” Oracle CEO Larry Ellison said during yesterday’s analyst call about its Q3 results.
Oracle President Mark Hurd also stated in the press release before the call, “…Fusion in the Cloud is winning with great success against niche HCM cloud vendors in the US and worldwide. Our modular, integrated platform of 100 apps available in the cloud or on-premise is a key differentiator.”
Over at Forbes, Victoria Barret highlights how Ellison could not resist going after big-fish rivals Salesforce.com and SAP after he planted the Oracle Secure Cloud flag in the ground:
“Here he couldn’t help but take aim at Salesforce.com, suggesting that the company run by his former protégé, Marc Benioff, can’t offer the same level of security. Benioff has long ridiculed Ellison for selling legacy software systems not able to keep pace with the shift to cloud computing.”
Then Ellison swiftly moved on to SAP, explaining that the German rival hasn’t yet moved its heavy-duty business software suite to the cloud. “Six years ago we made the decision to write Fusion. It will take years for SAP to catch up,” he said. SAP’s Web offering, called Business ByDesign, seems so far limited to smaller customers. SAP in December announced 1,000 customers for the product. Then again, Ellison did not mention customer names for Fusion or Secure Cloud.
Oracle Secure Cloud, which will be available in the next few weeks, is a private cloud, rented by the month and managed by Oracle, but living behind a company’s own firewall in their data center. “Salesforce.com does not offer this kind of security in their cloud. This is a key advantage for us,” Ellison said during the call yesterday. “But by far our biggest application competitor is SAP, not Salesforce.com. And SAP does not even offer CRM, HCM and financial applications in the cloud to their large customers.”
“Six years ago we made the decision to rewrite our ERP and CRM suite for the cloud. We called it Fusion. SAP called it confusion,” Ellison said. “It will take years for SAP to catch up.”
Ellison, well known for his flamboyance and fierce competitiveness, even went so far as to question SAP’s sobriety with its focus on building Oracle competitor HANA. “When SAP, and specifically Hasso Plattner, said they’re going to build this in-memory database and compete with Oracle, I said, ‘God, get me the name of that pharmacist, they must be on drugs,’” he told analysts yesterday. “That was interpreted by Hasso as Larry doesn’t believe in in-memory databases… We’ve been working on in-memory databases for 10 years. We have the world’s leading in-memory database. It’s called TimesTen.”
So that’s where Oracle has planted its flag with regard to the cloud, in contrast to Salesforce and Workday, and butting heads with SAP. Is that going to do the trick? Is Oracle keenly aware of what the marketplace wants, or is it putting its game face on given what it can offer in terms of cloud now? Do the RightNow and Taleo buys, in addition to Fusion (in the cloud), give it enough to go on? Have your say in the comments section, below.
-Adario Strange, via PCMag.com
In recent years Google has taken its fair share of criticism from publishers as its Google News aggregation and AdWords micro-advertising have disrupted traditional publishing in major ways. But a new product quietly launched by Google this week might provide a powerful new business model for online publishing.
Google Consumer Surveys allows publishers to make money from running various micro-surveys on their sites. When a user visits a participating site, they will be presented with a survey before being allowed access to the content (text, video, or apps). Think of it as a soft paywall in which the user still gets the content for free, and doesn't need to register, but can't simply click the well-known "skip this ad" link to access the desired content. Once the short survey is filled out, the user gets her content for free, the publisher earns a small payment, and the company behind the survey gets the valuable market data it was looking for from a real, sometimes demographically specific person.
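To make the mechanics concrete, here is a rough, hypothetical sketch of a publisher-side survey gate (the endpoint, survey ID, and prompt helper are invented for illustration and are not Google's actual publisher snippet):

    // Hypothetical sketch of a survey "soft paywall" (invented names, not
    // Google's actual publisher code): hide the article until one short
    // question is answered, report the answer, then reveal the content.
    function showSurveyPrompt(question: string, choices: string[]): string | null {
      // Minimal stand-in UI; a real widget would render inline instead.
      return window.prompt(`${question}\n(${choices.join(" / ")})`);
    }

    async function gateContentWithSurvey(article: HTMLElement): Promise<void> {
      article.hidden = true; // the content stays free, it just sits behind the prompt
      let answer: string | null = null;
      while (answer === null) {
        // The reader cannot skip past the prompt the way they can skip an ad.
        answer = showSurveyPrompt("Which of these brands have you heard of?", [
          "Brand A", "Brand B", "None of these",
        ]);
      }
      // The advertiser pays per response and the publisher hosting the prompt gets a cut.
      await fetch("/surveys/respond", {
        method: "POST",
        headers: { "Content-Type": "application/json" },
        body: JSON.stringify({ surveyId: "demo-survey", answer }),
      });
      article.hidden = false; // no registration and no charge for the reader
    }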
Large or small companies can target survey questions toward the general U.S. population for $0.10 per response (or $150.00 for 1,500 responses), or opt for demographic targeting at $0.50 per response ($750.00 for 1,500). Insights are grouped by demographics including income, location (U.S. Northeast, South, Midwest, West Coast), age (18-24, 25-34, 35-65+) and gender.
After setting up a survey, companies have the ability to view extremely detailed breakdowns of the survey answer data. Alcohol, tobacco, gambling, and pharmaceutical products are currently excluded from the program. Publishers already set up to use the survey tool with their content offerings include The Texas Tribune, the Star Tribune, and Adweek.
"The idea behind Google Consumer Surveys is to create a model that benefits everyone," said Google product manager Paul McDonald. "You get to keep enjoying your favorite online content, publishers have an additional option for making money from that content, and businesses have a new way of finding out what their customers want."
Upon further inspection, it does appear that Google may have finally discovered the Holy Grail for monetizing digital content in a way that benefits everyone. Few consumers have a problem filling out short, anonymous surveys; most online publishers have already learned that surveys are a fun way to engage visitors, particularly when it comes to niche sites; and large companies absolutely live and die on the vital data that market research provides regarding emerging trends and current consumer tastes.
-Jon Udell, via Wired.com/Cloudline (sponsored by IBM)
As we migrate personal data to the cloud, it seems that we trade privacy for convenience. It’s convenient, for example, to access my address book from any connected device I happen to use. But when I park my address book in the cloud in order to gain this benefit, I expose my data to the provider of that cloud service.
When the service is offered for free, supported by ads that use my personal info to profile me, this exposure is the price I pay for convenient access to my own data. The provider may promise not to use the data in ways I don’t like, but I can’t be sure that promise will be kept.
Is this a reasonable trade-off?
For many people, in many cases, it appears to be. Of course we haven’t, so far, been given other choices. And other choices can exist. Storing your data in the cloud doesn’t necessarily mean, for example, that the cloud operator can read all the data you put there. There are ways to transform it so that it’s useful only to you, or to you and designated others, or to the service provider but only in restricted ways.
Early Unix systems kept users’ passwords in an unprotected system file, /etc/passwd, that anyone could read. This seemed crazy when I first learned about it many years ago. But there was a method to the madness. The file was readable, so anyone could see the usernames. But the passwords were transformed, using a cryptographic hash function, into gibberish. The system didn’t need to remember your cleartext password. It only needed to verify that when you typed your cleartext password at logon, the operation that originally encoded its /etc/passwd equivalent would, when repeated, yield a matching result.
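A minimal sketch of that verify-by-recomputing idea in modern terms (a salted key-derivation function stands in for the original crypt(3) scheme, and the record layout is purely illustrative):

    import { randomBytes, scryptSync, timingSafeEqual } from "node:crypto";

    // Store only a salt and the derived hash, never the cleartext password.
    function makePasswordRecord(password: string): { salt: string; hash: string } {
      const salt = randomBytes(16).toString("hex");
      const hash = scryptSync(password, salt, 32).toString("hex");
      return { salt, hash };
    }

    // At logon, repeat the same transformation and compare the results.
    function verifyPassword(password: string, record: { salt: string; hash: string }): boolean {
      const candidate = scryptSync(password, record.salt, 32);
      return timingSafeEqual(candidate, Buffer.from(record.hash, "hex"));
    }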
Everything old is new again. When it was recently discovered that some iPhone apps were uploading users’ contacts to the cloud, one proposed remedy was to modify iOS to require explicit user approval. But in one typical scenario that’s not a choice a user should have to make. A social service that uses contacts to find which of a new user’s friends are already members doesn’t need cleartext email addresses. If I upload hashes of my contacts, and you upload hashes of yours, the service can match hashes without knowing the email addresses from which they’re derived.
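A rough sketch of what that matching could look like (the normalization and hash choice are only illustrative, and plain unsalted digests of email addresses can still be brute-forced, which is one of the caveats):

    import { createHash } from "node:crypto";

    // Both sides work only with digests of normalized addresses.
    function emailDigest(email: string): string {
      return createHash("sha256").update(email.trim().toLowerCase()).digest("hex");
    }

    // Client: upload digests, never the addresses themselves.
    const myContacts = ["alice@example.com", "bob@example.com"];
    const uploaded = myContacts.map(emailDigest);

    // Server: intersect the new user's digests with those of existing members
    // (memberDigests maps digest -> member ID) without ever seeing an address.
    function findMutualFriends(digests: string[], memberDigests: Map<string, string>): string[] {
      return digests.filter((d) => memberDigests.has(d)).map((d) => memberDigests.get(d)!);
    }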
In the post Hashing for privacy in social apps, Matt Gemmell shows how it can be done. Why wasn’t it? Not for nefarious reasons, Gemmell says, but rather because developers simply weren’t aware of the option to use hashes as a proxy for email addresses.
The best general treatise I’ve read on this topic is Peter Wayner’s Translucent Databases. I reviewed the first edition a decade ago; the revised and expanded second edition came out in 2009. A translucent system, Peter says, “lets some light escape while still providing a layer of secrecy.”
Here’s my favorite example from Peter’s book. Consider a social app that enables parents to find available babysitters. A conventional implementation would store sensitive data — identities and addresses of parents, identities and schedules of babysitters — as cleartext. If evildoers break into the service, there will be another round of headlines and unsatisfying apologies.
A translucent solution encrypts the sensitive data so that it is hidden even from the operator of the service, while yet enabling the two parties (parents, babysitters) to rendezvous.
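As a toy illustration of the idea (not Wayner's exact scheme), the sketch below has the service store only opaque, keyed digests of the sensitive fields, yet a parent and a sitter in the same neighborhood can still find each other:

    import { createHash } from "node:crypto";

    // A secret shared within the community (distributed with the app, say)
    // keeps outsiders from simply hashing every possible ZIP code.
    const COMMUNITY_KEY = "illustrative-shared-secret";

    function opaqueTag(value: string): string {
      return createHash("sha256").update(COMMUNITY_KEY + value.toLowerCase()).digest("hex");
    }

    // What the service stores: no names or addresses, just tags and availability.
    const sitterListings = [
      { areaTag: opaqueTag("02139"), saturdayEvenings: true },
    ];

    // A parent searches by computing the same tag for their own neighborhood;
    // the operator only ever sees that two opaque tags matched.
    const matches = sitterListings.filter(
      (s) => s.areaTag === opaqueTag("02139") && s.saturdayEvenings,
    );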
How many applications can benefit from translucency? We won’t know until we start looking. The translucent approach doesn’t lie along the path of least resistance, though. It takes creative thinking and hard work to craft applications that don’t unnecessarily require users to disclose, or services to store, personal data. But if you can solve a problem in a translucent way, you should. We can all live without more of those headlines and apologies.
FOR IMMEDIATE RELEASE.
Athens, Greece - The political leadership in Athens, Greece, today signed an agreement with representatives of The Pirate Bay (TPB) about exclusive usage of the Greek airspace at 8000-9000 ft.
- This might come as a shock for many but we believe that we need to both raise money to pay our debts as well as encourage creativity in new technology. Greece wants to become a leader in LOSS, says Lucas Papadams, the new and crisply elected Prime Minister of Greece.
The LOSS he is referring to is not the state of the country’s finances but rather Low Orbit Server Stations, a new technology recently invented by TPB. Having long been a leader in other types of LOSS, TPB has been working hard on making LOSS a viable solution for achieving 100% uptime for its services.
- Greece is one of the few countries that understands the value of LOSSes. We have been talking to them ever since we came up with the solution, seeing that we have an equal need to find financially sustainable solutions for our projects, says Win B. Stones, head of R&D at TPB.
The agreement gives TPB a 5-year license to use and re-distribute usage of the airspace at 8000-9000 ft, as well as unlimited usage of the radio spectrum between 2350 and 24150 MHz. Due to the financial situation of both parties, TPB will pay the costs with digital goods, sorely needed by the citizens of Greece.
-Scott Gilbertson, via WebMonkey.com
The high-resolution retina display iPad has one downside — normal resolution images look worse than on lower resolution displays. On the web that means that text looks just fine, as does any CSS-based art, but photographs look worse, sometimes even when they’re actually high-resolution images.
Pro photographer Duncan Davidson was experimenting with serving high-resolution images to the iPad 3 when he ran up against what seemed to be a limit to the resolution of JPG images in WebKit. Serving small high-resolution images — in the sub-2000px range — works great, but replacing 1000px wide photographs with 2000px wide photos actually looks worse due to downsampling.
The solution, it turns out, is to go back to something you probably haven’t used in quite a while — progressive JPGs. It’s a clever workaround for a little quirk in Mobile Safari’s resource limitations. Read Davidson’s follow-up post for more details, and be sure to look at the example image if you’ve got a new iPad, because more than just a clever solution, this is what the future of images on the web will look like.
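Re-encoding a photo as a progressive JPEG is a one-liner in most image tools. Here is a sketch using the sharp library for Node (the library choice and file names are just an example; Photoshop's progressive export or ImageMagick's interlace option do the same job):

    import sharp from "sharp";

    // Re-encode a large photo as a progressive (interlaced) JPEG so that
    // Mobile Safari can display it within its decoding limits.
    await sharp("photo-2000px.jpg")
      .jpeg({ progressive: true, quality: 80 })
      .toFile("photo-2000px-progressive.jpg");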
As Davidson says:
For the first time, I’m looking at a photograph I’ve made on a screen that has the same sort of visceral appeal as a print. Or maybe a transparency laying on a lightbox. Ok, maybe not quite that good, but it’s pretty incredible. In fact, I really shouldn’t be comparing it to a print or a transparency at all. Really, it’s its own very unique experience.
But how could you go about serving the higher res image to just those screens with high enough resolution and fast enough connections to warrant it?
So what’s a web developer with high-res images to show off supposed to do? Well, right now you’re going to have to decide between all or nothing. Or you can use a hack like one of the less-than-ideal responsive image solutions we’ve covered before.
Right now visitors with the new iPad are probably a minority for most websites, so not that many people will be affected by low-res or poorly rendered high-res images. But Microsoft is already prepping Windows 8 for high-res retina-style screens and Apple is getting ready to bring the same concept to laptops.
The high-res future is coming fast and the web needs to evolve just as fast.
In the long run that means the web is going to need a real responsive image solution; something that’s part of HTML itself. A new HTML element like the proposed <picture> tag is one possible solution. The picture element would work much like the video tag, with code that looks something like this:
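    <!-- A sketch of the proposed markup; the exact syntax was still being debated -->
    <picture alt="A photo served at several resolutions">
      <source media="(min-width: 1024px)" src="photo-large.jpg">
      <source media="(min-width: 600px)" src="photo-medium.jpg">
      <source src="photo-small.jpg">
      <!-- Fallback for browsers that don't understand <picture> -->
      <img src="photo-small.jpg" alt="A photo served at several resolutions">
    </picture>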
The browser uses this code to choose which image to load based on the current screen width.
The picture element would solve one part of the larger problem, namely serving the appropriate image to the appropriate screen resolution. But screen size isn’t the only consideration; we also need a way to measure the bandwidth available.
At home on my Wi-Fi connection I’d love to get Davidson’s high-res images on my iPad. When I’m out and about using a 3G connection it would be better to skip that extra overhead in favor of faster page load times.
Ideally browsers would send more information about the user’s environment along with each HTTP request. Think screen size, pixel density and network connection speed. Developers could then use that information to make a better-informed guess about which images to serve. Unfortunately, it seems unlikely we’ll get such tools standardized and widely supported before the high-res world overtakes the web. With any server-side solution to the bandwidth problem still far off on the horizon, navigator.connection will become even more valuable in the meantime.
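Until then the guess has to happen client-side. Here is a rough sketch using window.devicePixelRatio together with navigator.connection, which is non-standard and inconsistently exposed across browsers, so it should be treated as a hint at best (the @2x file-naming convention is just an example):

    // Only request the 2x image when the screen can show it and the connection
    // doesn't look like metered cellular. navigator.connection is a hint at best.
    type NetworkHint = { type?: string; effectiveType?: string };

    function pickImageSrc(base: string): string {
      const dpr = window.devicePixelRatio || 1;
      const conn = (navigator as Navigator & { connection?: NetworkHint }).connection;
      const looksCellular =
        conn?.type === "cellular" || (conn?.effectiveType ?? "").includes("2g");
      return dpr >= 2 && !looksCellular ? `${base}@2x.jpg` : `${base}.jpg`;
    }

    // Swap in the chosen source for every image marked up with a data-src stem,
    // e.g. <img data-src="photos/hero" alt="...">.
    document.querySelectorAll<HTMLImageElement>("img[data-src]").forEach((img) => {
      img.src = pickImageSrc(img.dataset.src!);
    });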
Further complicating the problem are two additional factors: data caps on mobile connections and technologies like Apple’s AirPlay. The former means that even if I have a fast LTE connection and a high-resolution screen, I still might not want to use my limited data allotment to download high-res images.
AirPlay means I can browse to a site with my phone — which would likely trigger smaller images and videos since it’s a smaller screen — but then project the result on a huge HD TV screen. This is not even a hypothetical problem; you can experience it today with PBS’s iPhone app and AirPlay.
Want to help figure out how the web needs to evolve and what new tools we’re going to need? Keep an eye on the W3C’s Responsive Images community group, join the mailing list and don’t be shy about contributing. Post your experiments on the web and document your findings like Davidson and countless others are already doing.
It’s not going to happen overnight, but eventually the standards bodies and the browser makers are going to start implementing solutions and the more test cases that are out there, the more experimenting web developers have done, the better those solutions will be. It’s your web after all, so make it better.
-Scott Gilbertson, via WebMonkey.com
Microsoft wants in on the drive to speed up the web. The company plans to submit its proposal for a faster internet protocol to the standards body charged with creating HTTP 2.0.
Not coincidentally, that standards body, the Internet Engineering Task Force (IETF), is meeting this week to discuss the future of the venerable Hypertext Transfer Protocol, better known as HTTP. On the agenda is creating HTTP 2.0, a faster, modern approach to internet communication.
One candidate for HTTP 2.0 is Google’s SPDY protocol. Pronounced “speedy,” Google’s proposal would replace the HTTP protocol — the language currently used when your browser talks to a web server. When you request a webpage or a file from a server, chances are your browser sends that request using HTTP. The server answers using HTTP, too. This is why “http” appears at the beginning of most web addresses.
The SPDY protocol handles all the same tasks as HTTP, but SPDY can do it all about 50 percent faster. Chrome and Firefox both support SPDY and several large sites, including Google and Twitter, are already serving pages over SPDY where possible.
Part of the IETF’s agenda this week is to discuss the SPDY proposal, and the possibility of turning it into a standard.
But now Microsoft is submitting another proposal for the IETF to consider.
Microsoft’s new HTTP Speed+Mobility lacks a catchy name, but otherwise appears to cover much of the same territory SPDY has staked out. Though details on exactly what HTTP Speed+Mobility entails are thin, judging by the blog post announcing it, HTTP Speed+Mobility builds on SPDY but also includes improvements drawn from work on the HTML5 WebSockets API. The emphasis is on not just the web and web browsers, but mobile apps.
“We think that apps — not just browsers — should get faster,” writes Microsoft’s Jean Paoli, General Manager of Interoperability Strategy.
To do that, Microsoft’s HTTP Speed+Mobility “starts from both the Google SPDY protocol and the work the industry has done around WebSockets.” What’s unclear from the initial post is exactly where HTTP Speed+Mobility goes from that hybrid starting point.
But clearly Microsoft isn’t opposed to SPDY. “SPDY has done a great job raising awareness of web performance and taking a ‘clean slate’ approach to improving HTTP,” writes Paoli. “The main departures from SPDY are to address the needs of mobile devices and applications.”
SPDY co-inventor Mike Belshe writes on Google+ that he welcomes Microsoft’s efforts and looks forward to “real-world performance metrics and open source implementations so that we can all evaluate them.”
Belshe also notes that Microsoft’s implication that SPDY is not optimized for mobile “is not true.” Belshe says that the available evidence suggests that developers are generally happy using SPDY in mobile apps, “but it could always be better, of course.”
The process of creating a faster HTTP replacement will not mean simply picking any one vendor’s protocol and standardizing it. Hopefully the IETF will take the best ideas from all sides and combine them into a single protocol that can speed up the web. The exact details — and any potential speed gains — from Microsoft’s HTTP Speed+Mobility contribution remain to be seen, but the more input the IETF gets the better HTTP 2.0 will likely be.