Ventrilo is Voice over IP (VoIP) software that includes text chat. The program was released on August 3, 2002 by developer Brian Knapp. It can be installed on Mac OS X and Microsoft Windows, with the server also available for Linux and other Unix variants. The software is licensed as proprietary freeware. Users can rent a hosting server and then connect to Ventrilo using the host address and login information from the rented server. Users can create multiple channels within the hosted server and share the login information with their friends so they can join the channels and communicate with each other. Ventrilo currently supports GSM Full Rate and Speex as its main codecs.
When Mark Zuckerberg called for a “Space Hackathon” to decorate Facebook’s massive new headquarters at 1 Hacker Way, he probably didn’t expect employees to take him so literally. A few scurried up to the roof with some tar paint, and now there’s a 42-foot wide QR code on the roof that’s visible from space.
Scanning it opens the new FB QR Code Page on Facebook which may host puzzles, jokes, and other flavor to humanize the company. For now you’ll need an airplane or Facebook security badge to get a look at it first-hand, but once indexed it should appear on your favorite satellite mapping website.
When the Hackathon was announced, most employees imagined beautifying the campus with posters and spray-paint murals, but Mark Pike had something bigger in mind. “It started with a comment on Zuck’s post. I wrote, ‘Hack yeah! I’d like to paint a gigantic QR code somewhere so we can RickRoll online maps, or point people to our careers site, or send them to a “Clarissa Explains it All” GeoCities page,’” Pike says.
At 8pm the night of the Hackathon, Pike and 5 others started on the project. By midnight they had a crew of 30, but it would take until daylight to see whether they screwed up the layout. An employee hacked a Canon camera’s firmware, strapped it to a home-made remote-controlled “frankencopter”, and flew it over the roof.
When it landed with the photos, the team finally learned that the code really worked. Facebook let us on the roof to check it out, and it’s pretty epic. You can try scanning the big picture above with Scan for iOS, QR Droid for Android, or most other QR apps.
What started in a dorm room has matured into one of the world’s most powerful companies. Now there are database architecture and business models to worry about. But with a silly 42-foot wide QR code on the roof, Facebook proves it’s still young at heart.
Safeway is one of the leading grocery stores in California, providing services to millions of individuals at a time. Safeway has also used social media to promote excellent customer service relations with its customers. Safeway currently uses Facebook, Twitter, and blogs to reach out to its client base. The ability to reach out to its clientele is a key way to ensure future business and to handle customer complaints as they arise. In addition, Safeway is working to offer delivery services to its clients, and social media portals are the best way to notify and inform prospective clients.
In 2010, TechCrunch broke the news that Facebook was going to release a “Like” button for the whole darn Internet. Now, TechCrunch has learned Facebook is considering a “Hate” button as well.
According to Facebook’s S-1 filing, users are now generating 2.7 billion Likes and Comments per day. With the Hate button, Facebook expects to at least double that. The S-1 noted “popular Pages on Facebook include Lady Gaga, Disney, and Manchester United, each of which has more than 20 million Likes.” Many inside the company think the Hates could easily top that.
When the original Like button was announced, Mark Zuckerberg made a bold prediction there would be over 1 billion Likes across the web in just the first 24 hours. Sources at Facebook say Mark is estimating 2 billion Hates on the first day. Facebook studies have shown the sad fact that people hate things on the Internet more than they like things. There’s also an internal debate on whether the new button should be called “Hate” or “Dislike.”
Since the tiny Like button makes up such a huge part of Facebook’s revenue, the introduction of the Hate button could raise Facebook’s valuation further ahead of the IPO.
Facebook has already shown they are open to changing the Like button. Earlier this month, Facebook Mobile changed the 2-Click Like button with a 1-Click Like bar.
The company has also experimented with the “Fax” button, as TechCrunch was also the first to notice.
Other buttons under consideration are the “Meh”, “Love”, “Who Cares”, and “+1”, but there is also a fear this could lead to a button explosion.
Our sources say the Hate button is not a sure thing. It’s being heavily debated inside the social networking company. This new feature would fit with Facebook’s mission to “build tools to help people connect with the people they want and share what they want” whether that’s love or hate.
While the product and sales teams favor the idea, many inside Facebook oppose it. That view is best summed up by Robert Scoble who wrote “I really hope we never see a hate button that gets wide adoption. The world has enough hate as it is.”
Since Facebook is in its quiet period ahead of its IPO, the company had no official comment on this report.
Research In Motion (RIM), a global leader in wireless
innovation, revolutionized the mobile industry with the introduction of the
BlackBerry solution in 1999. BlackBerry products and services are used by
millions of customers around the world to stay connected to the people and
content that matter most throughout their day.
BlackBerry offers its customers BlackBerry communities to join and follow, where they can receive information from and send feedback to the company. Customers can follow BlackBerry via Facebook, Twitter, YouTube, Flickr, and its blogs. This allows BlackBerry to communicate with its customers at any time of the day, any day of the week.
In times of trouble, social media is a useful tool for helping to remedy a problem. Recently, Visa's services were down for approximately 45 minutes in certain areas of the US on Sunday, April 1. Many Visa cardholders were extremely upset because they were unable to make purchases or pay bills. As of 5:17 on April 1, spokeswoman Sandra Chu reported that the problem was unrelated to the security breach on Friday, when a breach at Global Payments Inc. potentially affected both Visa and MasterCard customers. Sunday's incident, however, was completely unrelated; Chu reported that it resulted from a Visa system update.
With social media at its disposal, Visa was able to reduce speculation about the incident and relay facts to its customers, assuring them that the issue was being handled accordingly.
Piriform is a private limited company founded in London, UK in 2004. The company is located at 78 York Street, London, United Kingdom. CCleaner is one of the main products currently developed by Piriform. CCleaner cleans unwanted leftover files from computer programs such as Internet Explorer, Google Chrome, Opera, Safari, Windows Media Player, eMule, Google Toolbar, Netscape, Microsoft Office, Nero, Adobe Acrobat, Adobe Flash Player, Sun Java, WinRAR, and other applications. CCleaner also includes a Windows Registry cleaner and an uninstall tool. CCleaner can be installed on both Windows and Mac OS X. The program is available in 47 languages and is free.
The military has a data problem. More specifically, it has a too-much-data problem. Analysts have to sort through massive amounts of information collected by orbiting surveillance drones and satellites, or find the data trails left behind by spies inside defense networks. Sorting through all this data is also necessary for making unmanned vehicles more autonomous.
Bring on the White House’s new “big data” research initiative. Announced this morning, the plan aims to invest “more than $200 million” in six government agencies to develop systems to “extract knowledge and insights from large and complex collections of digital data,” according to a White House statement (.pdf). That means anything too large for normal software to handle, meaning data sets of at least dozens of terabytes, at minimum. The biggest beneficiary of all this could be the Department of Defense.
The Pentagon already spends hundreds of millions annually on “big data”-esque problems. The initiative announced today could add to that kitty up to $60 million per year for new research projects. That includes a $25 million yearly sum for a new Darpa data mining program called XDATA, which is broadly defined as a tool to analyze large amounts of meta-data and “unstructured” data like message traffic. (In comparison, the Department of Energy is receiving only $25 million in funding for a new data mining institute and the National Science Foundation is being granted $13.4 million.)
Where is all the rest of that defense research going? Several places, with much of it going toward helping drones crunch the massive amounts of information pulled from their sensors.
“The Department of Defense is placing a big bet on big data,” Zachary Lemnios, the Pentagon’s research and engineering chief, told reporters on Thursday. “We are within sight of a new generation of systems that understand and interpret the real world with computer speed, computer precision and human agility. These systems will not only be central to helping our commanders and analysts make sense of the huge volumes of data our military sensors collect, they will also support multiple missions.”
Some of these systems, like Darpa’s Mind’s Eye, seek to develop “visual intelligence” in aerial sensors, which would give military computers the ability to connect visual data with pre-written cues. Effectively, that could mean giving drones the tools to spot enemy soldiers automatically. Other programs likely to benefit include the Insight program, which helps drones spot potential threats on the battlefield.
That information is “growing rapidly in both volume and complexity,” Darpa acting director Ken Gabriel said. “From scraps of paper to hard drives, to overhead imagery and intercepts — the data collected is often imperfect, incomplete and heterogeneous. This trend is further accelerated by the proliferation of various digital devices and the internet. All of which is used by our adversaries to operate and hide in this data terrain. The sheer volume of information itself is creating a background clutter.”
Clutter so thick, even a quarter-billion dollars in investments may not be enough to cut through.
A purported leaked screenshot of a Google Drive download page. Image courtesy of TalkAndroid
This morning, Ars Technica’s Jon Brodkin reported on newly leaked images of Google Drive, a rumored cloud storage offering from the web storage king. Gmail already offers more than 7 gigabytes (GB) of email storage, so the question was: what will Google Drive offer? From these leaked images — the validity of which is unverified — the answer is 5GB, even though last week’s leak suggested 2GB.
Why the screenshots would show different starting sizes is unclear. Assuming the images are real, they could have been taken from different stages of development, or may not be representative of the final version that Google will offer.
In the last few months, rumors have swirled and many have speculated on when exactly Google is going to jump into the consumer cloud storage market. Companies like Dropbox and Box both offer gigabytes of free storage so that anyone can access their PDFs, pictures, and documents from any web browser. Despite not yet having a dedicated service, Google is still the web’s storage giant, so such a product seems inevitable.
Have your say: Will you take Google Drive for a ride if 5 GB is on offer? Should the folks at DropBox be worried?
Adobe helps businesses thrive in a world where media is pervasive and marketing is increasingly being held accountable for business results. The company is the leader in delivering solutions that let customers produce, distribute, and realize value from great content, whether in media and publishing or in digital marketing. Adobe is changing the world through digital experiences. Its content authoring solutions lead the industry, enabling customers to more effectively produce, distribute, and monetize digital content. Adobe allows its customers to contact it via Facebook, Twitter, email, and phone, and it delivers the most innovative solutions for optimizing marketing campaigns and maximizing the return on every marketing dollar.
While coverage ahead of Oracle’s fiscal third quarter results yesterday focused on it losing ground to younger cloud rivals, my question of “But for how long?” did not take long to be answered, sort of.
“After a long period of testing … Oracle’s cloud applications will be generally available. We’ve named our cloud the Oracle Secure Cloud,” Oracle CEO Larry Ellison said during yesterday’s analyst call about its Q3 results.
Oracle President Mark Hurd also stated in the press release before the call, “…Fusion in the Cloud is winning with great success against niche HCM cloud vendors in the US and worldwide. Our modular, integrated platform of 100 apps available in the cloud or on-premise is a key differentiator.”
Over at Forbes, Victoria Barret highlights how Ellison could not resist going after big-fish rivals Salesforce.com and SAP after he planted the Oracle Secure Flag brand in the ground:
“Here he couldn’t help but take aim at Salesforce.com, suggesting that the company run by his former protege, Marc Benioff, can’t offer the same level of security. Benioff has long ridiculed Ellison for selling legacy software systems not able to keep pace with the shift to cloud computing.”
Then Ellison swiftly moved on to SAP, explaining that the German rival hasn’t yet moved its heavy-duty business software suite to the cloud. “Six years ago we made the decision to write Fusion. It will take years for SAP to catch up,” he said. SAP’s Web offering, called Business ByDesign, seems so far limited to smaller customers. SAP in December announced 1,000 customers to the product. Then again, Ellison did not mention customer names for Fusion or Secure Cloud.
Oracle Secure Cloud, which will be available in the next few weeks, is a private cloud, rented by the month and managed by Oracle, but living behind a company’s own firewall in their data center. “Salesforce.com does not offer this kind of security in their cloud. This is a key advantage for us,” Ellison said during the call yesterday. “But by far our biggest application competitor is SAP, not Salesforce.com. And SAP does not even offer CRM, HCM and financial applications in the cloud to their large customers.”
“Six years ago we made the decision to rewrite our ERP and CRM suite for the cloud. We called it Fusion. SAP called it confusion,” Ellison said. “It will take years for SAP to catch up.”
Ellison, well known for his flamboyance and fierce competitiveness, even went so far as to question SAP’s sobriety over its focus on building Oracle competitor HANA. “When SAP, and specifically Hasso Plattner, said they’re going to build this in-memory database and compete with Oracle, I said, ‘God, get me the name of that pharmacist, they must be on drugs,’” he told analysts yesterday. “That was interpreted by Hasso as Larry doesn’t believe in in-memory databases… We’ve been working on in-memory databases for 10 years. We have the world’s leading in-memory database. It’s called TimesTen.”
So that’s where Oracle has planted its flag with regard to the cloud, in contrast to Salesforce and Workday, and butting heads with SAP. Is that going to do the trick? Is Oracle keenly aware of what the marketplace wants, or is it putting its game face on given what it can offer in terms of cloud now? Do the RightNow and Taleo buys, in addition to Fusion (in the cloud) give it enough to go on? Have your say in the comments section, below.
Over 1,100 games are available through Steam to purchase, download, and play from any computer. Check out the new releases, indie hits, casual favorites, and everything in between. Find someone
to play with, meet up with friends, connect with groups of similar interests,
and host and join chats, matches, and tournaments. See when your friends are
online or playing games and easily join the same games together. Chat with your
buddies, or use your microphone to communicate in any game. The Steam Community comprises people who play all sorts of PC and Mac games. Now it's easy to find your friends online, organize groups, and join chats. Steam also offers its customers an online community where it releases news and updates, lets users collaborate with each other, and contacts its customers directly.
In recent years Google has taken its fair share of criticism from publishers as its Google News aggregation and AdWords micro-advertising have disrupted traditional publishing in major ways. But a new product quietly launched by Google this week might provide a powerful new business model for online publishing.
Google Consumer Surveys allows publishers to make money from running various micro-surveys on their sites. When a user visits a participating site, they will be presented with a survey before being allowed access to the content (text, video, or apps). Think of it as a soft paywall in which the user still gets the content for free, and doesn't need to register, but can't simply click the well-known "skip this ad" link to access the desired content. Once the short survey is filled out, the user gets their content for free, the publisher earns a small payment, and the company behind the survey gets the valuable market data it was looking for from a real, sometimes demographically specific person.
Large or small companies can target survey questions toward the general U.S. population for $0.10 per response (or $150.00 for 1,500 responses), or opt for demographic targeting at $0.50 per response ($750.00 for 1,500). Insights are grouped by demographics including income, location (U.S. Northeast, South, Midwest, West Coast), age (18-24, 25-34, 35-65+) and gender.
After setting up a survey, companies have the ability to view extremely detailed breakdowns of the survey answer data. Alcohol, tobacco, gambling, and pharmaceutical products are currently excluded from the program. Publishers already set up to use the survey tool with their content offerings include The Texas Tribune, the Star Tribune, and Adweek.
"The idea behind Google Consumer Surveys is to create a model that benefits everyone," said Google product manager Paul McDonald. "You get to keep enjoying your favorite online content, publishers have an additional option for making money from that content, and businesses have a new way of finding out what their customers want."
Upon further inspection, it does appear that Google may have finally discovered the Holy Grail for monetizing digital content in a way that benefits everyone. Few consumers have a problem filling out short, anonymous surveys; most online publishers have already learned that surveys are a fun way to engage visitors, particularly on niche sites; and large companies absolutely live and die on the vital data that market research provides about emerging trends and current consumer tastes.
-Jon Udell, via Wired.com/Cloudline (sponsored by IBM)
As we migrate personal data to the cloud, it seems that we trade convenience for privacy. It’s convenient, for example, to access my address book from any connected device I happen to use. But when I park my address book in the cloud in order to gain this benefit, I expose my data to the provider of that cloud service.
When the service is offered for free, supported by ads that use my personal info to profile me, this exposure is the price I pay for convenient access to my own data. The provider may promise not to use the data in ways I don’t like, but I can’t be sure that promise will be kept.
Is this a reasonable trade-off?
For many people, in many cases, it appears to be. Of course we haven’t, so far, been given other choices. And other choices can exist. Storing your data in the cloud doesn’t necessarily mean, for example, that the cloud operator can read all the data you put there. There are ways to transform it so that it’s useful only to you, or to you and designated others, or to the service provider but only in restricted ways.
Early Unix systems kept users’ passwords in an unprotected system file, /etc/passwd, that anyone could read. This seemed crazy when I first learned about it many years ago. But there was a method to the madness. The file was readable, so anyone could see the usernames. But the passwords were transformed, using a cryptographic hash function, into gibberish. The system didn’t need to remember your cleartext password. It only needed to verify that when you typed your cleartext password at logon, the operation that originally encoded its /etc/passwd equivalent would, when repeated, yield a matching result.
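The /etc/passwd scheme can be sketched in a few lines of Python. This is a simplified illustration, not the historical implementation: early Unix used the DES-based crypt(3) function, and a real system today would use a deliberately slow hash like bcrypt; the salted SHA-256 below just shows the shape of the idea.

```python
import hashlib
import os

def make_passwd_entry(username: str, cleartext: str) -> str:
    """Build an /etc/passwd-style entry: the password is stored only as
    salt + hash, never as cleartext."""
    salt = os.urandom(8).hex()
    digest = hashlib.sha256((salt + cleartext).encode()).hexdigest()
    return f"{username}:{salt}${digest}"

def verify(entry: str, cleartext: str) -> bool:
    """Repeat the original transformation and compare the results."""
    _, stored = entry.split(":", 1)
    salt, digest = stored.split("$", 1)
    return hashlib.sha256((salt + cleartext).encode()).hexdigest() == digest

entry = make_passwd_entry("alice", "s3cret")
print(verify(entry, "s3cret"))   # True: same input, same gibberish
print(verify(entry, "wrong"))    # False
```

Anyone can read the entry, but knowing the gibberish doesn't reveal the password; logon only needs to re-run the transformation and compare.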
Everything old is new again. When it was recently discovered that some iPhone apps were uploading users’ contacts to the cloud, one proposed remedy was to modify iOS to require explicit user approval. But in one typical scenario that’s not a choice a user should have to make. A social service that uses contacts to find which of a new user’s friends are already members doesn’t need cleartext email addresses. If I upload hashes of my contacts, and you upload hashes of yours, the service can match hashes without knowing the email addresses from which they’re derived.
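The matching idea fits in a few lines. A caveat worth stating up front: unsalted hashes of email addresses can still be brute-forced against a list of known addresses, so this sketch is illustrative rather than a complete privacy solution.

```python
import hashlib

def hashes(emails):
    # Normalize, then hash: the service only ever sees these digests,
    # never the cleartext addresses they were derived from.
    return {hashlib.sha256(e.strip().lower().encode()).hexdigest() for e in emails}

my_contacts = ["alice@example.com", "bob@example.com"]
your_contacts = ["Bob@example.com", "carol@example.com"]

# The service intersects the two hash sets to find mutual contacts
# without learning a single address.
mutual = hashes(my_contacts) & hashes(your_contacts)
print(len(mutual))  # 1 -- bob@example.com is the overlap
```

Normalization matters: both sides must lowercase and trim addresses the same way, or identical contacts will hash to different digests and matching silently fails.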
In the post Hashing for privacy in social apps, Matt Gemmell shows how it can be done. Why wasn’t it? Not for nefarious reasons, Gemmell says, but rather because developers simply weren’t aware of the option to use hashes as a proxy for email addresses.
The best general treatise I’ve read on this topic is Peter Wayner’s Translucent Databases. I reviewed the first edition a decade ago; the revised and expanded second edition came out in 2009. A translucent system, Peter says, “lets some light escape while still providing a layer of secrecy.”
Here’s my favorite example from Peter’s book. Consider a social app that enables parents to find available babysitters. A conventional implementation would store sensitive data — identities and addresses of parents, identities and schedules of babysitters — as cleartext. If evildoers break into the service, there will be another round of headlines and unsatisfying apologies.
A translucent solution encrypts the sensitive data so that it is hidden even from the operator of the service, while yet enabling the two parties (parents, babysitters) to rendezvous.
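One way such a rendezvous could work is sketched below. This is a minimal sketch of the general idea, not Wayner's exact scheme: both parties derive a lookup key by hashing something only they know (here, an invented neighborhood code), so the operator stores and matches opaque digests without ever seeing who is looking for whom.

```python
import hashlib

def key_for(neighborhood: str, week: str) -> str:
    # Only someone who knows the neighborhood code can derive this key;
    # the service operator sees nothing but opaque digests.
    return hashlib.sha256(f"{neighborhood}|{week}".encode()).hexdigest()

db = {}  # what the service stores: digest -> postings (ideally encrypted)

# A babysitter posts availability under the derived key.
db.setdefault(key_for("maple-street", "2012-w14"), []).append("sitter-posting")

# A parent in the same neighborhood derives the same key and finds a match.
matches = db.get(key_for("maple-street", "2012-w14"), [])
print(len(matches))  # 1

# An outsider (or the operator) without the code finds nothing meaningful.
print(len(db.get(key_for("oak-street", "2012-w14"), [])))  # 0
```

A break-in at the service now yields a table of digests and ciphertext rather than a directory of families and schedules.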
How many applications can benefit from translucency? We won’t know until we start looking. The translucent approach doesn’t lie along the path of least resistance, though. It takes creative thinking and hard work to craft applications that don’t unnecessarily require users to disclose, or services to store, personal data. But if you can solve a problem in a translucent way, you should. We can all live without more of those headlines and apologies.
With more than 100 million active users globally, eBay is the world's largest online marketplace, where practically anyone can buy and sell practically anything. eBay connects a diverse and passionate community of individual buyers and sellers, as well as small businesses. It's a meeting point for eBay buyers and sellers to chat, ask questions, and exchange advice and tips with each other. eBay gives its customers online chat between buyers and sellers, as well as between customers and customer service. This allows eBay to deal with its customers directly and give them the correct guidance.
Athens, Greece - Political powers in Athens, Greece, today signed an agreement with representatives of The Pirate Bay (TPB) for exclusive usage of Greek airspace at 8,000-9,000 ft.
- This might come as a shock for many, but we believe that we need both to raise money to pay our debts and to encourage creativity in new technology. Greece wants to become a leader in LOSS, says Lucas Papadams, the newly elected Prime Minister of Greece.
The LOSS he is referring to is not the state of the country's finances but rather Low Orbit Server Stations, a new technology recently invented by TPB. Having long been a leader in other types of LOSS, TPB has been working hard to make LOSS a viable solution for achieving 100% uptime for its services.
- Greece is one of the few countries that understands the value of LOSSes. We have been talking to them ever since we came up with the solution, seeing that we share the need to find financially sustainable solutions for our projects, says Win B. Stones, head of R&D at TPB.
The agreement gives TPB a 5-year license to use and re-distribute usage of the airspace at 8,000-9,000 ft, as well as unlimited usage of the radio spectrum between 2,350 and 24,150 MHz. Due to the financial situation of both parties, TPB will pay the costs in digital goods, sorely needed by the citizens of Greece.
Commitment to diversity is just part of what is called the Nokia Way – the core values and shared philosophy that make the company tick. Creativity, empowerment, openness, collaboration, and consideration for people and the environment – these are all integral to the way Nokia does business. But above all, it's about being human in everything the company does – respecting and caring, even in tough business situations. Nokia offers its customers the option to contact it via email, phone, Facebook, Twitter, and even live online chat. This allows Nokia to give feedback to its customers instantly.
Blizzard Entertainment is a premier developer and
publisher of entertainment software. After establishing the Blizzard
Entertainment label in 1994, the company quickly became one of the most popular
and well-respected makers of computer games.
By focusing on creating well-designed, highly enjoyable entertainment
experiences, Blizzard Entertainment has maintained an unparalleled reputation
for quality since its inception.
Blizzard invites its customers to follow it on Facebook, Twitter, and YouTube. It also maintains forums for each of its games and for technical support. This allows Blizzard to communicate with its customers directly, giving and receiving information in order to improve its products and services.
The high-resolution retina display iPad has one downside — normal resolution images look worse than on lower resolution displays. On the web that means that text looks just fine, as does any CSS-based art, but photographs look worse, sometimes even when they’re actually high-resolution images.
Pro photographer Duncan Davidson was experimenting with serving high-resolution images to the iPad 3 when he ran up against what seemed to be a limit to the resolution of JPG images in WebKit. Serving small high-resolution images — in the sub-2000px range — works great, but replacing 1000px wide photographs with 2000px wide photos actually looks worse due to downsampling.
The solution, it turns out, is to go back to something you probably haven't used in quite a while: progressive JPEGs. It's a clever answer to a little quirk in Mobile Safari's resource limitations. Read Davidson's follow-up post for more details, and be sure to look at the example image if you've got a new iPad, because more than just a clever workaround, this is what the future of images on the web will look like.
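The difference between a baseline and a progressive JPEG is visible right in the file's marker segments: baseline images carry an SOF0 marker (0xFFC0), progressive ones an SOF2 marker (0xFFC2). A minimal sketch of telling them apart (the marker values are from the JPEG spec; the stub bytes below are synthetic, not a complete image):

```python
def jpeg_mode(data: bytes) -> str:
    """Walk a JPEG's marker segments and report baseline vs progressive."""
    i = 2  # skip the 0xFFD8 start-of-image marker
    while i + 4 <= len(data):
        assert data[i] == 0xFF, "expected a marker byte"
        marker = data[i + 1]
        if marker == 0xC0:
            return "baseline"     # SOF0: sequential DCT
        if marker == 0xC2:
            return "progressive"  # SOF2: progressive DCT
        # Other segments carry a 2-byte big-endian length; skip over them.
        length = int.from_bytes(data[i + 2:i + 4], "big")
        i += 2 + length
    return "unknown"

# Tiny synthetic header: SOI, then an SOF2 segment with a dummy payload.
progressive_stub = bytes([0xFF, 0xD8, 0xFF, 0xC2, 0x00, 0x04, 0x00, 0x00])
print(jpeg_mode(progressive_stub))  # progressive
```

Most image tools can write either form; the encoding is what changes, not the pixels, which is why swapping to progressive costs nothing in image quality.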
As Davidson says:
For the first time, I’m looking at a photograph I’ve made on a screen that has the same sort of visceral appeal as a print. Or maybe a transparency laying on a lightbox. Ok, maybe not quite that good, but it’s pretty incredible. In fact, I really shouldn’t be comparing it to a print or a transparency at all. Really, it’s its own very unique experience.
But how could you go about serving the higher-res image to just those screens with high enough resolution and fast enough connections to warrant it?
So what’s a web developer with high-res images to show off supposed to do? Well, right now you’re going to have to decide between all or nothing. Or you can use a hack like one of the less-than-ideal responsive image solutions we’ve covered before.
Right now visitors with the new iPad are probably a minority for most websites, so not that many people will be affected by low-res or poorly rendered high-res images. But Microsoft is already prepping Windows 8 for high-res retina-style screens and Apple is getting ready to bring the same concept to laptops.
The high-res future is coming fast and the web needs to evolve just as fast.
In the long run, that means the web is going to need a real responsive image solution, something that's part of HTML itself. A new HTML element like the proposed <picture> tag is one possible solution. The picture element would work much like the video tag, with code that looks something like this:
<picture alt="image description">
<source src="mobile.jpg"> <!-- Matches by default -->
<source src="large.jpg" media="(min-width: 600px)">
</picture>
The browser uses this code to choose which image to load based on the current screen width.
The picture element would solve one part of the larger problem, namely serving the appropriate image to the appropriate screen resolution. But screen size isn’t the only consideration; we also need a way to measure the bandwidth available.
At home on my Wi-Fi connection I’d love to get Davidson’s high-res images on my iPad. When I’m out and about using a 3G connection it would be better to skip that extra overhead in favor of faster page load times.
Ideally, browsers would send more information about the user's environment along with each HTTP request: think screen size, pixel density, and network connection speed. Developers could then use that information to make a better-informed guess about which images to serve. Unfortunately, it seems unlikely we'll get such tools standardized and widely supported before the high-res world overtakes the web. With any server-side solution to the bandwidth problem still far off on the horizon, client-side tools like navigator.connection will become even more valuable in the meantime.
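If browsers did send such hints, the server-side logic would be simple. A sketch of what that negotiation could look like, with everything invented for illustration: there was no standard way to receive these values at the time, and the filenames and thresholds are hypothetical.

```python
# Hypothetical server-side image negotiation. All inputs here are
# assumptions: no standard client hints for screen size, pixel density,
# or connection speed existed when this was written.

def pick_variant(width_px: int, pixel_ratio: float, fast_network: bool) -> str:
    """Choose an image variant from (hypothetical) client hints."""
    effective = width_px * pixel_ratio  # pixels actually needed
    if not fast_network:
        return "photo-1000.jpg"   # spare the data plan on slow links
    if effective >= 2000:
        return "photo-2000.jpg"   # high-res for retina-class screens
    return "photo-1000.jpg"

print(pick_variant(1024, 2.0, True))   # photo-2000.jpg
print(pick_variant(1024, 2.0, False))  # photo-1000.jpg
```

Note how the bandwidth check overrides the screen check: a retina screen on a 3G link still gets the smaller file, which is exactly the trade-off the surrounding text describes.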
Further complicating the problem are two additional factors, data caps on mobile connections and technologies like Apple’s AirPlay. The former means that even if I have a fast LTE connection and a high-resolution screen I still might not want to use my limited data allotment to download high-res images.
AirPlay means I can browse to a site with my phone — which would likely trigger smaller images and videos, since it's a smaller screen — but then project the result onto a huge HD TV screen. This isn't even a hypothetical problem; you can experience it today with PBS's iPhone app and AirPlay.
Want to help figure out how the web needs to evolve and what new tools we’re going to need? Keep an eye on the W3C’s Responsive Images community group, join the mailing list and don’t be shy about contributing. Post your experiments on the web and document your findings like Davidson and countless others are already doing.
It’s not going to happen overnight, but eventually the standards bodies and the browser makers are going to start implementing solutions and the more test cases that are out there, the more experimenting web developers have done, the better those solutions will be. It’s your web after all, so make it better.
Inspiring Innovation, Persistent Perfection (IIPP) is the ASUS brand promise. It symbolizes the company's commitment to making life better through innovation, and its belief that life-changing shifts can only be achieved by keeping ahead of the curve and not resting on past successes. Through the years, ASUS' visionary approach has seen it become a major proponent of consumer technology, bringing quality innovation and design into consumers' lives. ASUS offers its customers the ability to follow it on Facebook, Twitter, and YouTube. In addition, customers can contact ASUS and other customers through its online community forum, allowing customers to interact with each other and with ASUS.
Microsoft wants in on the drive to speed up the web. The company plans to submit its proposal for a faster internet protocol to the standards body charged with creating HTTP 2.0.
Not coincidentally, that standards body, the Internet Engineering Task Force (IETF), is meeting this week to discuss the future of the venerable Hypertext Transfer Protocol, better known as HTTP. On the agenda is creating HTTP 2.0, a faster, modern approach to internet communication.
One candidate for HTTP 2.0 is Google’s SPDY protocol. Pronounced “speedy,” Google’s proposal would replace the HTTP protocol — the language currently used when your browser talks to a web server. When you request a webpage or a file from a server, chances are your browser sends that request using HTTP. The server answers using HTTP, too. This is why “http” appears at the beginning of most web addresses.
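For a concrete picture of what SPDY would replace, here is what a plain HTTP/1.1 exchange looks like on the wire: a minimal sketch using only Python’s standard library, with the host name as a placeholder.

```python
import socket

def build_request(host, path="/"):
    """Assemble the raw text of an HTTP/1.1 GET request."""
    return (
        "GET {} HTTP/1.1\r\n"
        "Host: {}\r\n"
        "Connection: close\r\n"
        "\r\n"
    ).format(path, host)

def http_get(host, path="/"):
    """Send the request over a plain TCP socket and return the raw reply."""
    with socket.create_connection((host, 80)) as sock:
        sock.sendall(build_request(host, path).encode("ascii"))
        chunks = []
        while True:
            data = sock.recv(4096)
            if not data:
                break
            chunks.append(data)
    return b"".join(chunks)

# Example (requires network access; "example.com" is a placeholder):
# print(http_get("example.com").split(b"\r\n")[0])  # the status line
```

Every request and response in HTTP/1.1 is plain text like this, one exchange per connection by default; SPDY’s gains come largely from multiplexing and compressing these exchanges.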
The SPDY protocol handles all the same tasks as HTTP, but SPDY can do it all about 50 percent faster. Chrome and Firefox both support SPDY and several large sites, including Google and Twitter, are already serving pages over SPDY where possible.
Part of the IETF’s agenda this week is to discuss the SPDY proposal, and the possibility of turning it into a standard.
But now Microsoft is submitting another proposal for the IETF to consider.
Microsoft’s new HTTP Speed+Mobility lacks a catchy name, but otherwise appears to cover much of the same territory SPDY has staked out. Though details on exactly what HTTP Speed+Mobility entails are thin, judging by the blog post announcing it, HTTP Speed+Mobility builds on SPDY but also includes improvements drawn from work on the HTML5 WebSockets API. The emphasis is on not just the web and web browsers, but mobile apps.
“We think that apps — not just browsers — should get faster,” writes Microsoft’s Jean Paoli, General Manager of Interoperability Strategy.
To do that, Microsoft’s HTTP Speed+Mobility “starts from both the Google SPDY protocol and the work the industry has done around WebSockets.” What’s unclear from the initial post is exactly where HTTP Speed+Mobility goes from that hybrid starting point.
But clearly Microsoft isn’t opposed to SPDY. “SPDY has done a great job raising awareness of web performance and taking a ‘clean slate’ approach to improving HTTP,” writes Paoli. “The main departures from SPDY are to address the needs of mobile devices and applications.”
SPDY co-inventor Mike Belshe writes on Google+ that he welcomes Microsoft’s efforts and looks forward to “real-world performance metrics and open source implementations so that we can all evaluate them.”
Belshe also notes that Microsoft’s implication that SPDY is not optimized for mobile “is not true.” Belshe says that the available evidence suggests that developers are generally happy using SPDY in mobile apps, “but it could always be better, of course.”
The process of creating a faster HTTP replacement will not mean simply picking any one vendor’s protocol and standardizing it. Hopefully the IETF will take the best ideas from all sides and combine them into a single protocol that can speed up the web. The exact details — and any potential speed gains — from Microsoft’s HTTP Speed+Mobility contribution remain to be seen, but the more input the IETF gets the better HTTP 2.0 will likely be.
Expert Labs, the non-profit organization behind ThinkUp, a web-based data-liberation and analytics application, is rebooting into a commercial entity.
No need to panic if you use ThinkUp to back up your social network life; the application will remain open source and freely available.
But Expert Labs is going away and ThinkUp is refocusing on a larger goal — liberating your online social life from the clutches of corporate web entities.
In its own words, the new ThinkUp wants to build “an information network that connects to today’s social networks, but isn’t centralized and dependent on a company or investors.”
That’s not an entirely new idea. Diaspora and some other projects are trying to do the same thing, but ThinkUp is taking a different approach — it wants to build an app first and focus on the user experience rather than the underlying technology.
In fact ThinkUp already is an app that’s pretty close to what it’s aiming to do. ThinkUp is a web-based app that pulls your data out of social silos like Facebook or Twitter and stores it on your own server. You control your own data, and have a record of your conversations potentially long after Facebook, Twitter and the rest have become mere footnotes in the history of the web.
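The self-hosting idea is straightforward to sketch. The code below is a hypothetical illustration of the pattern, not ThinkUp’s actual implementation: posts fetched from a social network’s API get copied into a local SQLite database you control. The post fields are invented for the example.

```python
import sqlite3

# Hypothetical sketch of the ThinkUp idea: keep your own copy of your
# social data in a database you control. The post schema is invented.

def archive_posts(posts, db_path=":memory:"):
    """Store a list of {'id', 'text', 'created'} dicts in a local database."""
    conn = sqlite3.connect(db_path)
    conn.execute(
        "CREATE TABLE IF NOT EXISTS posts (id TEXT PRIMARY KEY, text TEXT, created TEXT)"
    )
    # INSERT OR IGNORE makes repeated archive runs idempotent.
    conn.executemany(
        "INSERT OR IGNORE INTO posts VALUES (:id, :text, :created)",
        posts,
    )
    conn.commit()
    return conn

# Usage with made-up data:
conn = archive_posts([
    {"id": "1", "text": "hello web", "created": "2012-03-28"},
])
```

Point `db_path` at a file on your own server and the archive outlives any one network’s terms of service, which is the whole point.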
For more on how ThinkUp works and how you can use it be sure to check out our earlier coverage and then grab the code and try it for yourself.
So what of ThinkUp’s new, loftier goals? Is any attempt to replace Facebook doomed to failure? Of course not. Everything is replaceable, just ask MySpace. And ThinkUp believes its approach is different. “Prior attempts have tried to solve this problem based on the assumption that it is a technical challenge,” says ThinkUp’s Knight News Challenge application. “We believe it to be a social one.” ThinkUp’s focus going forward will be on the social and the interface:
We will draw people in through a compelling media site that encourages participation via our decentralized platform… a peer-to-peer network that powers a great media property with broad appeal — imagine if Digg or Reddit were open, decentralized and powered by a network instead of votes.
If you’re curious to know what that might look like, head on over to the ThinkUp proposal for the Knight News Challenge and click the heart icon to “like” it (incidentally, if the Knight News Challenge sounds familiar, that might be because it’s also the birthplace of EveryBlock). In the meantime, work on the ThinkUp app continues with a new release that improves the charts and graphs and paves the way for the coming Foursquare support. Check out the ThinkUp GitHub page for more details.
Samsung has been dedicated to making a better world through diverse businesses that today span advanced technology, semiconductors, skyscraper and plant construction, petrochemicals, fashion, medicine, finance, and hotels. It leads the global market in high-tech electronics manufacturing and digital media. Through innovative, reliable products and services; talented people; a responsible approach to business and global citizenship; and collaboration with its partners and customers, Samsung is taking the world in imaginative new directions. Samsung allows customers to contact the company through Facebook, Twitter, Google+, email, and phone. It also offers a window on its services page for starting a chat with a representative.
Yahoo has announced it will soon support the Do Not Track privacy header across its sprawling network of websites. Supporting Do Not Track means you will soon be able to easily tell Yahoo to stop tracking your movements around the web.
Much like the Do Not Call registry, the Do Not Track system offers a way to opt out of third-party web tracking.
The Do Not Track header began life at Mozilla, but has since moved to the W3C where it was converted into a web standard by the Tracking Protection Working Group.
The Do Not Track header now works in every major desktop browser except Google Chrome, though none of them turn it on by default. Still, for privacy-conscious users savvy enough to enable Do Not Track, the header offers a quick and easy way to tell advertisers that you don’t want to be followed while you browse the web.
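The mechanics behind that browser setting are simple: the browser adds a single `DNT: 1` header to each outgoing request. A sketch with Python’s standard library, with the host as a placeholder:

```python
import http.client

def dnt_headers(extra=None):
    """Build request headers that carry the Do Not Track signal (DNT: 1)."""
    headers = {"DNT": "1"}
    if extra:
        headers.update(extra)
    return headers

# Sending a request that carries the header (placeholder host):
# conn = http.client.HTTPConnection("example.com")
# conn.request("GET", "/", headers=dnt_headers())
```

Whether the server honors the signal is entirely up to it, which is why announcements like Yahoo’s matter: the header is a request, not an enforcement mechanism.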
Numerous online advertising groups already respect the Do Not Track header and refrain from tracking users that enable it. Today’s announcement means that, starting this summer, you can add Yahoo to the list of companies that will stop tracking you if you’ve enabled Do Not Track in your web browser.
Of course, there are still many advertisers and websites that don’t yet support Do Not Track. If you’re concerned about your online privacy and don’t want to rely on the goodwill of advertisers, there are other, more aggressive steps you can take to limit how you’re tracked on the web.
In “The hidden risk of a meltdown in the cloud,” a Technology Review blogger reacts to a paper by Bryan Ford on “the unrecognised risks of cloud computing.” I don’t know, the risks seem familiar to me. Beyond security, they are:
Unpredictable interactions among loosely-coupled services
Inability to preserve or reproduce an application or data set
The Technology Review blogger, who is evidently known only by the nom de plume Kentucky FC, echoes Ford’s conclusion: We ought to study these risks “before our socioeconomic system becomes completely and irreversibly dependent on a computing model whose foundations may still be incompletely understood.”
OK, yes, we should study the risks. But that doesn’t mean we can’t engage with the cloud while doing so. It isn’t an all-or-none proposition.
Think about our relationship to the power grid. We are, in fact, irrevocably committed to it. And it is prone to occasional dramatic failures. I have a few friends who live off the grid, but most of us plug in, and then some hedge their bets to varying degrees. Do you own a generator? If so, how much of your demand does it power? And for how long? An hour? A day? A week?
For enterprises, a hybrid strategy that blends cloud and on-premises resources is gaining traction. That’ll make sense for individuals too. Our personal clouds encompass resources both in the sky and scattered across our own devices. As we extend into the cloud we’ll learn how to use it to complement the strengths and offset the weaknesses of our local setups. There is, as always, a continuum of risk and benefit. We’ll make personal choices to occupy points along that continuum. And those points will drift over time.
Meanwhile, let’s consider one analogy drawn by Bryan Ford and echoed by Kentucky FC.
Ford: Non-transparent layering structures … may create unexpected and potentially catastrophic failure correlations, reminiscent of financial industry crashes.
KFC: The cloud could suffer the same kind of collapses that plague the financial system….
It’s true that the unpredictability of complex interaction is a similar concern in both realms. But when things have gone wrong, cloud providers have been refreshingly open about it. Consider the post-mortems for some notable Amazon Web Services (AWS) and Azure outages. Both set a high standard, explaining what went wrong, why, how it was fixed, and what steps are being taken to prevent a recurrence.
We can only dream of a financial industry that runs as transparently, and holds itself to such a standard.