Modified on by pentago
For every successful app on the market, there are thousands of services and products that simply could not make it. So what’s the secret of a popular app? Why do some app developers struggle to increase their income while others make millions almost instantly? Of course, it’s well worth outsourcing to an app developer with experience and know-how. Just as important is understanding the monetization strategy behind your app as early as you can. In tandem, a great app developer and a sound monetization strategy will significantly increase your chances of a successful app.
The vast majority of apps on the App Store or Google Play are free to use. That means that even if your initial goal of making your app popular is accomplished, you are still not making money on it, or at least not as much as you wanted. Pokémon Go, a viral mobile game, is a great example of how a hip, free-of-charge app can make a lot of money for its owners as well as for many other businesses. The game is based on augmented reality (AR) technology that combines your real-world location with your phone’s camera to show virtual Pokémon on the display.
The economics behind the Pokémon Go app start with a free-to-download game. By offering a free app you can quickly grow the number of people who use it and create a buzz. For instance, according to Forbes, if you search for mentions of HootSuite (a freemium product), you’ll find 10M results; search for Sendible (a premium-only competitor) and you’ll find 160K. Similarly, searching for mentions of MailChimp (freemium) yields 10M results, while Aweber (premium-only) yields 718K. That said, not every aspect of the game is accessible without paying. It offers in-app purchases such as the in-game currency PokéCoins, which players use to buy helpful items like Poké Balls for “catching” Pokémon, and inventory upgrades. The key is to set up proper, well-balanced incentives for these in-app purchases, as they can become a viable strategy to drive revenue. There are several ways to implement in-app purchases in a free-to-download app: users can buy additional time or items in games, extra functionality, or extended general usage. Worldwide revenue from in-app purchases is expected to hit about $58 billion by the end of 2016 and to skyrocket to $76.5 billion in 2017, according to the Statista portal.
Companies widely employ geolocation as an advertising technology: when a user walks by a partner business or enters a particular part of the city, they see a unique promo message. This is applicable to any kind of app, whether a game or a business tool. Wouldn’t it be great if your clients received a pop-up saying that your partner is nearby and can offer their services, or that a shop nearby offers a 10% discount to those using your app? You can follow suit and use geolocation in your app to provide bonuses or coupons when people enter a partner business.
According to The App Solutions, a mobile app development company, location-based advertising is one of the ‘most exciting’ mobile opportunities right now; thanks to games like Pokémon Go, location-based ads should get a boost from augmented-reality gaming in 2016 and beyond. Data from The App Solutions shows that 8 of 10 consumers prefer using their smartphones to make a final decision before purchasing, so mobile geolocation technology is well worth considering as a revenue driver.
Another way Niantic Inc., the developer of the app, can make money is by creating a companion device that improves the gamers’ experience. If you choose this path to promote your app, the device, like the Pokémon Go Plus, should be customizable and branded with your app logo so it is easily recognized. Such a device lets your active users improve their overall experience and keeps them using the app even more consistently. Companies like Fitbit and Nike have done this before, but in reverse: they created a device first and followed it with an app to improve the user experience. Whichever way you choose, there is money to be made.
You can also partner with existing businesses to use their technology or hardware to complement your app. Mobile entertainment has always been tied to mobile devices, but look at recent developments with the Samsung Gear or the Apple Watch: their wearable technology dramatically magnifies the value of the apps on their phones. So feel free to find a hardware provider to partner with and use their wearable tech to improve the user experience and make extra money.
Paid app & real life PokeSpots
One more option to monetize your app is to create an app for the app, however crazy that may sound. With Pokémon Go, this task is not as easy as with your average app because it uses new AR tech. To succeed in the game you have to move around a lot to find your next Pokémon or PokeSpot. The app on its own does not have a map detailing these locations, but apps have sprung up that offer a detailed map for a small fee. Creating a paid app that improves the experience of a free app is nothing new, and it is easy to implement.
While playing Pokémon Go’s AR you have to actually move around and visit physical places such as a café, a McDonald’s, a restaurant, or a mall. Given the viral nature of the app, many businesses can order a training ground from Niantic Inc. located on their premises. This draws players in, and they can make purchases on the go or stay on and battle it out with others on their phones. Sales of snacks, coffee, branded cups, and other merchandise can get a serious boost from such an option.
What Niantic Inc. and their partners do is not game-changing. They use up-to-date viral technology to rake in profits, both passively (through increases in stock price) and actively (through various revenue-harvesting techniques). Most importantly, they created a franchise that is a win-win for the app owner and its partners: small and large businesses that increase their social presence in their neighborhood or sell hardware to their clients.
The game is in essence pretty repetitive, so the company will be introducing new features and levels along the way. The same can be true for your app: you don’t need to launch all the monetization methods at once – choose the simplest tools and grow along the way.
Utilizing the latest technology like AR, combined with different ways of monetizing your app, seems to be one path toward a powerful, accelerating business. Alas, there is no ABC recipe for it. Hopefully, the advice above gives you some ground to think on.
KVM guest performance can be improved by knowing and selecting the guest caching mode that best fits your environment. The caching mode controls how the operating system maintains the page cache, and choosing the right one improves storage I/O performance.
A write operation to the storage system is considered complete once the data has been copied to the page cache; a read operation can be satisfied from the page cache if the requested data is present there.
fsync(2) flushes the page cache to permanent storage, whereas direct I/O bypasses the page cache entirely. In a Kernel-based Virtual Machine environment, both the host and the guest operating system can maintain their own page caches, leaving two copies of the same data in the system’s memory.
Normally one of these page caches is bypassed to improve KVM guest performance. For instance, if the application running in the guest uses direct I/O, it is better to bypass the guest page cache.
To bypass the host page cache instead, set the caching mode to none for the guest; all I/O operations from the guest then become direct I/O operations on the host.
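Both cache-handling paths mentioned above, fsync(2) and direct I/O, can be demonstrated from the shell with dd; the /tmp paths below are throwaway examples for this sketch, not part of any real guest setup.

```shell
# Write 4 MiB through the page cache, then flush it with fsync(2)
# before dd exits (that is what conv=fsync does).
dd if=/dev/zero of=/tmp/pagecache-demo.img bs=1M count=4 conv=fsync 2>/dev/null

# Write the same amount with O_DIRECT, bypassing the page cache.
# Direct I/O is not supported on every filesystem (tmpfs, notably),
# so fall back gracefully if the flag is rejected.
dd if=/dev/zero of=/tmp/direct-demo.img bs=1M count=4 oflag=direct 2>/dev/null \
  || echo "O_DIRECT not supported on this filesystem"
```

This is exactly the trade-off the caching modes expose: conv=fsync guarantees the data reached stable storage at the cost of waiting for the flush, while oflag=direct skips the cache copy entirely.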
Write performance can also be improved by planning around the disk write cache. A write operation is considered complete as soon as the data reaches the disk write cache, even though it has not yet been physically transferred to the disk media.
But the disk write cache carries a risk of data loss if it is not battery-backed and there is a power failure. Applications must issue fsync(2) to ensure that written data actually reaches the physical disk media.
Enabling the disk write cache usually improves write performance significantly, but in case of power failure the protection and integrity of the data are ensured only if the storage stack and the applications correctly flush the cached data to permanent storage.
On the other hand, disabling the disk write cache may hurt write performance, but it considerably lessens the risk of data loss on power failure, which is a good trade-off in some environments.
Some resources regarding KVM guest performance:
Usage of Virtio device drivers
I was able to improve my agency’s VPS (FreeBSD guest) performance this way before we turned to MacquarieTelecom’s secure hosting for government agencies.
The caching modes available in Red Hat Enterprise Linux 6 for improving KVM guest performance are described below.
Writethrough

This default caching mode enables the host page cache for the guest but disables the disk write cache. As a result, it keeps data integrity safe even if the applications and storage stack do not properly flush data to permanent storage using file system barriers or fsync operations.
Read performance is generally better for applications running in the guest because the host page cache is enabled in this mode, but the disabled disk write cache hurts KVM guest performance for write operations.
Writeback

This caching mode enables both the disk write cache and the host page cache. It improves I/O performance for applications running in the guest, but there is a risk of data loss because the cached data is not protected from power failure. This mode is therefore recommended only where the potential loss of unflushed data is acceptable.
None

This caching mode enables the disk write cache for the guest but disables the host page cache. It maximizes KVM guest write performance because write operations bypass the host page cache and the data goes straight to the disk write cache.
Data integrity can still be ensured with this caching mode if the disk write cache is battery-backed, or if the applications and storage stack properly flush data using file system barriers or fsync operations.
Read performance with this caching mode, however, may not reach the level of the modes that enable the host page cache, precisely because the host page cache is bypassed.
Unsafe

The unsafe caching mode ignores cache flush operations completely. As its name suggests, it should only be used for temporary data where the risk of loss cannot disturb the quality of operation. It can speed up guest installation, but to improve KVM guest performance in production you should opt for one of the other caching modes.
To conclude, for local or directly attached storage I recommend using one of the caching modes that enable the host page cache, such as writethrough, depending on your scenario.
Writethrough ensures data integrity while offering acceptable I/O performance for applications running in the guest, especially for read operations.
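In libvirt, the caching mode is selected per disk through the cache attribute of the driver element. Here is a sketch of a guest disk definition using writethrough; the image path and device names are illustrative, not taken from any particular setup:

```xml
<disk type='file' device='disk'>
  <!-- cache can be: writethrough, writeback, none, unsafe
       (and directsync in newer releases) -->
  <driver name='qemu' type='qcow2' cache='writethrough'/>
  <source file='/var/lib/libvirt/images/guest.qcow2'/>
  <target dev='vda' bus='virtio'/>
</disk>
```

The equivalent qemu command-line form is the cache suboption of -drive, e.g. -drive file=guest.qcow2,cache=writethrough.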
The cloud is transforming the world of business, and if your business isn’t yet on board, you’re running late for the revolution. The cloud harnesses the potential of always-on connectivity and lightning-quick responsiveness, and it has created a space in which a new industry of cloud services are thriving.
Companies the world over are leveraging the advantages that these new services have to offer, and they’re seeing returns in virtually every aspect of their operations. If you haven’t yet made a move to the cloud, here’s a look at what you’re missing.
Cloud services take advantage of economies of scale. Economies of scale are, put simply, the reduced per-unit costs that come from spreading the costs of an enterprise across a larger base – as a cloud service increases its client base, the cost per client decreases.
Thus, instead of investing in a network infrastructure capable of handling your company’s computing and storage needs, cloud services allow you to subscribe to a demand-based system that lets you pay only for what you use. Whether it be additional features or added storage capacity, the monthly price scales up or down based on demand.
Cloud services also take on the responsibilities – and the associated costs – of system and software updates and upgrades. Not only does this shift the duties away from your in-house IT department, but it also dramatically speeds up the rate at which they’re deployed.
The same principles that reduce the costs of cloud services also enhance the quality of the services they can provide. Cloud services are typically capable of offering a level of security far superior to what any one of their clients could achieve within the confines of its own staff and budget.
Major cloud-based service providers utilize hardened data centres to ensure the protection of their clients’ data. These facilities employ state-of-the-art firewalls, leading-edge encryption techniques, and even armed guards.
This protection extends to the integrity of the data as well. For example, Praktika provides you with automated backups that eliminate the need to back up and store your data locally, further reducing the hardware costs and payroll hours associated with these vital yet time-consuming tasks.
Cloud services are based on a software delivery model called Software as a Service – also known as SaaS. In this model, the software is maintained in a single location and accessed remotely using a standard web browser. As they utilize interfaces similar to any typical web application, employees typically require very little training to become proficient in their use.
This ease of use has significant advantages when it comes to their adoption and deployment – processes that once took months to complete and were often followed by intense periods of training and troubleshooting – but that’s only the beginning. Productivity is noticeably improved by systems such as these.
It’s not hard to see why – employees are able to access the service from any location with Internet access. Whether they’re at home or on the road, team members can communicate and collaborate in real time in the same virtual space. The boardroom has gone digital, and the conference table is now nothing more than a tablet.
The Cloud is Rising
The era of cloud computing has only just begun, but businesses are already clamouring to gain the competitive advantages that it offers. Indeed, there are burgeoning businesses that are basing entire business models upon the availability of cloud services.
This, of course, should be a clear warning to any company that hasn’t yet begun to consider the advantages of the cloud. Its benefits can be leveraged for you, but they can also be used against you.
Indeed, the biggest of companies are taking even bigger steps into the world of cloud computing. Entire infrastructures are being designed and deployed to serve as private clouds, complete with business-centric software solutions and in-house development teams.
The days of inflated licensing fees, bug-ridden software and long-delayed patches are over. Efficiency and agility are the name of the game in the world of cloud computing, and the cost reductions are simply too significant to overlook.
With expenditures on cloud computing expected to clock in at over $106 billion in 2016, competition between cloud services will only continue to improve their costs and capabilities. There’s no better time than now to take a step into the cloud.
Currently, there are over 3.9 million jobs in America associated with cloud computing, and of these, 384,478 are in information technology alone. IT professionals armed with cloud computing experience take home a median salary of $90,950. Internationally, there are a staggering 18,239,258 cloud computing jobs, with China accounting for the largest share at 40.8% of the industry.
These and other important insights come from WANTED Analytics, a company that specializes in data analytics on particular workplaces and industries. Its database contains over 1 billion job listings and documents workplace hiring trends from over 150 countries.
Most Wanted Computing Certifications:
To land these top jobs, it’s important to have an idea of what exact qualifications are required. The data gathered suggests investing in certificates such as Project Management Professional (PMP), Top Secret/Sensitive Compartmented Information clearance, Cisco Certified Network Associate (CCNA), and Certified Information Systems Security Professional. If you want to advance your qualifications and get your dream IT job in 2015, make sure to invest in one of the above certificates so that you keep a leading edge.
Number of IT Jobs:
It is important to have a clear understanding of the employment options within the industry. The analytics gathered show that the industry currently has 1,533,742 job openings globally. As previously noted, China leads the employment force with 40.8% of the jobs. The US is the second-highest employer in the field with 21.7% of the jobs, and India comes in third with 12.2% of computing and IT jobs.
Organizations Occupying the Workforce:
The top three worldwide organisations leading the IT workforce are IBM, Oracle, and Amazon. Other companies renowned in the IT world include General Dynamics, Dell, Accenture, WellPoint Inc., J.P. Morgan Chase & Co., Computer Sciences Corporation, Deloitte, Wells Fargo, and Lockheed Martin. The companies listed can be viewed as the trendsetters in the IT employment sector.
WANTED Analytics depicts a promising 2015, with predictions and analysis that indicate a significant increase in the demand for IT-related jobs. With this information at hand, the need for qualifications is becoming increasingly important.
With the above information, you are now equipped to concentrate your efforts in the right direction and establish a prosperous and fruitful career in 2015.
With an October 2015 deadline for US retailers and hospitality operators to migrate over to accepting EMV chip cards, current estimates are that at least $8.65 billion is being spent to prepare for the shift. But is that really the main reason customers are purchasing or upgrading a POS system? Some analysts say no -- with most of the world already operating with chip-and-pin cards, security and mobile payments are an even bigger factor. Especially for restaurants, being able to accept the latest payment options appears to be the most important benefit of a POS upgrade.
New Functionality is Hospitality Providers' Main Desire
While usability and reliability still appear to be the main factors hospitality operators use in making POS purchasing decisions, the same isn't true for POS upgrades. According to Hospitality Technology magazine, most hospitality operators looking to upgrade their systems are focused on being able to accept new payment options like mobile payments: 56% of restaurants cited "enabling new payment options" as the main consideration in their upgrade, 9% more than the next-most popular drivers (adding mobile POS functionality and preparing for the US EMV rollout). Both suppliers and restaurant operators agree that the ability to accept the latest mobile wallet payments is having a major impact on the market. At the same time, maximising security and preparing for EMV are almost as important (with 47% of restaurants citing EMV as a reason to upgrade and PCI security compliance being an issue for 45% of restaurants surveyed). Only about a quarter of restaurants were particularly concerned with integrating their POS with other systems.
Upgrades More Important than New Hardware
67% of the restaurants surveyed by Hospitality Technology said their goal at the moment is to upgrade their existing POS solution, rather than purchase a new one. Only 19% were planning to put in a POS solution from a new vendor, suggesting supplier relationships are fairly stable. Still, with 38% looking at new POS solutions which they might install after 2015, the POS industry could be seeing a change on the horizon.
Mobile Payments, Loyalty Tools, Tablet-based Software as the Most Popular Features
Among the features most restaurants are looking out for, mobile wallet functionality tops the list -- 59% of restaurants are looking to add the feature in 2015. Close on its heels are loyalty tools and tablet-based software which employees can use as they walk around. Social media integration, on the other hand, comes in at just 33%, along with many other features such as centralised POS and inventory management. Overall, the picture suggests that restaurants are looking for flexible systems that will "work the way they do" in taking orders and processing payments. Still, security is also playing a perhaps as yet unrecognized role.
Security as an Increasingly Large Driver
According to SAIC CIO Bob Fecteau, as quoted in the Wall Street Journal, payment security may be one of the largest "hidden trends" in the POS market. In his view, 2015 may see "a whole new level of security" start to take shape. Why? Unless banks and businesses tackle the challenges in current POS technology that criminals so frequently exploit, the resulting financial impact is likely to be "significant." The crucial issue with new mobile payment services such as Apple Pay will be how to keep them secure and avoid losses.
The rising popularity of new payment technologies comes at a time when criminals are ramping up their assaults on POS systems and mobile devices, according to Verisign iDefense Security Intelligence Services. Their 2015 "Cyber Trend and Threat Analysis" number-one top prediction is increasing attacks on mobile and POS technology. Their researchers have observed attackers developing new software to attack mobile platforms and POS devices. Despite law enforcement agencies' best efforts, the US alone is estimated to lose around $8.6 billion in credit card fraud each year. At the same time, the EMV shift means that merchants who use POS systems which are not EMV compliant but who take EMV cards accept liability for any fraudulent transactions.
For many merchants this is not much of an issue, of course: industry estimates are that 70% of the POS terminals outside the US are EMV compliant, while 40% of the cards in worldwide circulation support EMV. The highest adoption rate is in Europe (with 96% of card-present transactions using EMV), followed by Canada, Latin America, and the Caribbean. The Asia-Pacific region (including Australia) has 71% of terminals supporting EMV, but just 17% of cards.
Visual elements work better when trying to reach an online audience. We’re seeing an increase in user engagement when videos, photos and images are used as part of a marketing campaign. Aside from videos and memes, infographics are handy for conveying important messages and critical data.
Image credit: Pixabay
Marketing with infographics requires a particular approach. Instead of designing the campaign the way you would with photos or other types of visual cues, there are a few extra things you need to do and several more aspects to consider. We are going to take a closer look at those aspects in this ultimate guide to marketing with infographics.
Research, Data and Key Messages
The very first thing you need to do when creating an infographic for marketing purposes is get your facts straight. There is no room for error in this type of campaign; your viewers will focus on the facts and data you post in the infographic, which means even the slightest mistake will be apparent.
It is also important to select the right messages to add to the infographic. You don’t have to display everything; in fact, doing so will reduce the effectiveness of the infographic. Focus on key messages and try to create a storyline that viewers can actually follow as they scroll through the infographic.
Style and Design Elements
You have complete freedom over how the infographics are styled or designed. Although there are a number of templates and common forms that can be followed, there is no need to stick to a set of rules. In fact, creative agencies have been very experimental with the way they design infographics for clients.
Spiel Creative, for instance, is pushing forward animated infographics and unique design elements. Other creative agencies are taking different routes to make the infographics they produce stand out from the rest.
Whatever style or design elements you choose, always make sure that the infographics stay in line with your branding. Use a limited set of colours and direct focus to the key messages with just enough visual elements. Don’t add more pictures or design elements unless each of them has a specific purpose.
Keep Your Target Audience in Mind
Similar to written communications, visual cues may have different meanings to different audience groups. It is imperative that you keep your target audience in mind when producing infographics for marketing purposes.
Infographics are meant to be trendy and clear at the same time. You can use slang or visual elements that relate well to your target audience; in fact, this will help enhance the effectiveness of the infographics substantially. Don’t forget that a good infographic must have a narrative that the audience can follow.
At this point, you should be able to design and produce an infographic that works really well as part of a marketing campaign. All you have to do next is share the infographic and promote it in order to reach the right target audience. Be sure to evaluate the infographic before you move to designing the next one. This way, you can continue to improve and have a very powerful marketing tool at the end of the process.
Whenever I’m preparing for a vacation or holiday travel, there are two essential devices that I cannot afford to leave behind: my tablet and my phone. Yes, I’m hooked, and I rely heavily on them. Just like you.
In fact, what worries me the most at such times is whether my phone or tablet has ample space for all the photos I will take; my laptop is not part of the plan, and I always leave it behind even though I spend the majority of my working hours on it.
The way we use our devices today has changed both the industry and us personally in a great way, so I decided to dedicate some time to writing about the impact of mobile technologies on the world as we know it.
1. By the year 2013, mobile phones had overtaken PCs to become the most common Internet access devices across the globe.
The digital world has come a long way from the era of green-screen terminals, which were eventually replaced by PCs that also had green screens. It took almost a decade before color-screen PCs were found in the homes of average users.
After this breakthrough, the web browser became an essential element in performing many of our day to day work-related activities, though it came to pass after several years of endless inventions and innovations.
Fortunately, we are in the prime of mobile transformation where any average person can find all manner of applications on their smartphone. For example, now you can access multiple email accounts on your mobile phone.
2. In 2008, mobile media made history in the communications industry as the first sector to hit the $1 billion revenue mark after only 5 years, compared to the 16 years it took the Internet.
As I use my mobile phone to listen to music, read e-books, watch videos, play games and utilize other productivity tools (even manage work servers, YAY!), some people are making good money from me. Since I always have this gadget with me, temptations for impulse buying are always irresistible.
According to a friend from UniqueMobiles, the mobile media sector overtook the Internet even before smartphones had fully penetrated the market, particularly in developing countries where people still use feature phones. In the next five years, there will be massive transformations in the mobile media industry.
3. Today, more than 80% of the population owns a mobile phone
The transformative effect of mobile phones across the globe is just amazing, with IBM and Airtel organizing mobile development initiatives for Ghanaian students and improving the economies of developing countries. With 80% of the world’s population owning a mobile phone, developers can rest easy knowing that their work will reach as many people as possible.
4. Americans spend about 2.7 hours daily socializing on their mobile gadgets, and over twice that amount of time eating
Many people spend this time sending photos, tweets and instant messages and sharing what they are doing. This has led to the popularity of various niche apps. And you can talk to anyone, whether or not they are on the same device or network.
5. By the year 2014, phone Internet usage will overtake desktop Internet usage
To fellow devs: You’ll need to prepare yourself for this transition by updating all your apps to run on mobile devices and getting native apps for each platform.
6. In 2012, there were over 1.08 billion smartphones out of the 4 billion mobile phones globally
This figure shows that a quarter of phone users worldwide have smartphones, and they will possibly want to have native like capabilities and applications. So developers should work on creating a different interface for every mobile operating system, such as iOS, Android and Windows mobile.
7. It took 7 years for smartphone users to hit the 40 million mark, compared to tablet users, who reached 40 million after only 2 years
Although it took several years for smartphones to get to their current platform, the tablet market capitalized on those challenges to grow even faster. Therefore, every developer should be well versed in the major tablet platforms, such as iOS, Android, and Windows.
8. By the year 2011, there were over 400 different types of smartphone devices on the US market, providing the consumer with a wide variety of options to choose from
With over 400 types of smartphone devices, writing custom apps for all these devices may prove untenable for one developer. Although you may decide to focus on 80% of the market, the number of devices you will need to support is still overwhelming. I hear you, responsive webdevs!
The hardest part is writing code and testing all the potential iterations which require a lot of time that you may not have. The best and most effective way to focus your limited resources and time is analyzing the market penetration and concentrating on that.
9. In 2011, smartphone usage almost tripled
Although usage does not necessarily mean users, we should look at what we use our mobile gadgets for beyond texting and email. For example, I always listen to at least 2 hours of podcasts daily, some of which are videos. This strains my data plan, as it drives so much traffic. But my tablet’s Internet connection is faster than my home ISP’s, so I can use it to watch my podcasts.
This is what most employees who want to get their work done regardless of their location are yearning for. This great shift is going to transform how we manage our work and businesses. With more and more mobile devices finding their way into the workforce, IBM has decided to address the issue by rolling out a BYOD (bring your own device) program. Amazing!
10. In 2011, the size of mobile traffic was eight times that of the entire worldwide Internet in 2000
This implies that the mobile world is growing faster and transforming every IT aspect. It is therefore important for every developer to keep learning about it and stay updated on any new development. And this is the major reason why Impact 2013 was a must-attend event for every developer.
Most developers were delighted to attend the conference, as it gave them an opportunity to meet fellow developers, interact with the product managers and learn from customers how they are dealing with these challenges.
OK, here’s the funny fact. Monitors suck. They’re too small, and the bigger, quality ones are way too expensive. It’s essential for a modern, productive developer to work across multiple screens to avoid the frustration of constantly shuffling text editor/web browser/documentation windows around (ALT+TAB hell), up to the point where the time spent doing that could actually be spent smarter, working.
I decided to move all my web development to a different kind of local machine: the Raspberry Pi. It’s small, cheap, versatile, has an active development community and, most importantly, it works. It’s also very helpful for presentations in conference rooms with a large TV (my current company setup).
I chucked Debian onto it, assigned it its own IP, plugged in a fast external drive (auto-backed up to my main PC nightly) and installed all the tools I need now or might need in the future, including a web server, a database server and Git. A fully custom, neat workstation. Works magic for me.
An introduction to the introduction:
A Brief Device Profile
The nifty credit-card-sized $25-$35 microcomputer takes us back to the 1980s and the time of 8-bit computing. It’s a cheap, Linux-based device that doesn’t come with a monitor. You plug it into your TV, hook up a keyboard and a mouse, connect a power source, add an operating system and storage, and you have a computer.
The computer started out with the idea of getting kids interested in computer science. It doesn’t have a hard disk or SSD; instead it boots from an SD card, which also offers some storage space. Since it hit the market in 2012, the computer has also become popular with programmers looking for a handy and cheap device to test their projects on. Half a million units had been sold by September 2012.
In fact, there’s even a version of Minecraft for the Raspberry Pi. Imagine the geeky thrill of a long rail craft ride (or navigating the Nether hellfires) on your TV screen and you may want to know how to connect the device to your TV.
Connecting To The Television
The option of connecting your Raspberry Pi to the television makes it very flexible to use. Don’t be fooled by the size of the device. The microcomputer gives you three ways to get picture out of it: HDMI, RCA composite and VGA (via an adapter).
Here’s a look at how you can plug the microcomputer into the television using each of these three options.
1. HDMI Cable
The great thing about the little device is that it comes with an HDMI port. Most people today own televisions that have an HDMI port. If yours has one, all you have to do is to connect your device to the HDMI port of your TV with a cheap cable that you can get for a few dollars. This means you can connect the device to your TV set in the living room.
If you have a flat screen TV in the bedroom, that too will have an HDMI connector, so you can comfortably play Minecraft while lying in bed! In fact, if you own the microcomputer, the pieces of equipment that are must-haves apart from a power supply are an SD card and an HDMI cable. With the cable, you can connect the device to just about any PC monitor and TV available today.
But what if you don’t have an HDMI port on your TV? There are other options for you.
2. HDMI To VGA Adapter
If the monitor you want to connect to doesn’t have an HDMI port, check to see if it has a VGA connector. This is the D-shaped connector that old computers had. If the monitor has a VGA port, then all you have to do is get an HDMI-to-VGA adapter, which is readily and cheaply available.
You’ll also need to make a small change to the config.txt file used by the Pi for booting if you’re using VGA. Here’s how to do that. Pull out the SD card from the device and plug it into the memory card reader slot of your desktop PC or laptop. Open the config.txt file in a text editor and look for the following lines:
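The listing itself did not survive in this copy of the article. On Raspberry Pi firmware of that era, the two commented-out lines were most likely these (treat the exact option names as an assumption and check them against your own config.txt):

```
# Uncomment to force HDMI output even when no HDMI display is detected
# (needed for dumb HDMI-to-VGA adapters); hdmi_safe also falls back to a
# conservative 640 x 480 mode.
#hdmi_force_hotplug=1
#hdmi_safe=1
```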
Once you’ve found the two lines, uncomment them both. This allows the device to push VGA-compatible output through the HDMI adapter. It also lowers the default screen resolution to 640 x 480 to suit a VGA display.
You can set the device to output a resolution higher than 640 x 480 if you want. To do that, look for these two lines:
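This listing was also lost. Given that the text says to change '1' to '2' and '4' to '16', the stock lines would have read as below (assumed values; hdmi_group=2 selects DMT monitor timings, and hdmi_mode=16 is 1024 x 768 at 60 Hz within that group):

```
# Commented-out resolution settings in the stock config.txt (assumed values)
#hdmi_group=1
#hdmi_mode=4
```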
Again, delete the hash marks from both lines. Additionally, in the first line change '1’ to '2’, and in the second change '4’ to '16’. When you’ve done that, save the file, safely remove the SD card and put it back into your microcomputer. Power on and enjoy nostalgic VGA visuals.
Now, I know it’s a bit of an overkill, but I simply have to say this. Firing video to just one screen is not the limit :). By splitting the HDMI signal you'll be able to display your stuff over several displays if your workflow requires it. Some of those splitters cost a couple of times more than the whole RPi plus the time invested in setting it up, but there, you do have a choice.
3. RCA Output
The last option the device gives you for connecting to a visual display unit is the RCA connector. You’ll find it right next to the audio port, on the side opposite the HDMI port. The RCA port is a standard port found on most TV sets made since the 80s. However, the microcomputer gives preference to HDMI, so if an HDMI cable is also connected, it will automatically switch from RCA to HDMI output.
You can change the window display style of your new microcomputer as well, depending on the screen resolution of the monitor you’ve connected to. In fact, if the monitor is not of a high resolution, you may need to do this. All you have to do in that case is go to the config.txt file as explained above, change the overscan settings and configure the output to make it compatible with your monitor.
The Raspberry Pi is clearly a flexible device that users have found many other cool uses for. Want a digital picture frame but find it too expensive? You can convert the device into a picture frame at half the price, and have it display weather reports and movies as well! Or overclock it and create a synced MIDI and Christmas lights affair. You can check out cool projects to create with the device here.
But are you simply looking for a way to connect to a monitor? You may already have that RCA cable lying around somewhere, or an HDMI cable that could have you connected to a microcomputer media center in minutes. Also check out this great little (but detailed) unofficial tutorial to learn the basics of what you can do with this surprisingly resourceful little device.
Ten years ago, most of us would not have even been able to imagine the existence of 3D printing, much less all of the practical applications of this amazing technology. Some of the exciting new ways in which 3D printing is revolutionizing the world we live in include new applications in medicine, industry, and even the way we use water.
This video demonstrates the potential of 3D printing for improving the lives of thousands by producing high-quality prosthetics at a fraction of the cost. One California company, Not Impossible Labs, has taken this technology to war-torn Sudan to help alleviate the suffering of amputees. Training the locals to operate the machinery, they created and fitted customized prostheses, helping those without resources to regain mobility. In addition to prostheses, researchers are also using 3D printing to develop potentially life-saving implants such as heart valves.
Surgeons have used 3D printing to create substances that replace human bone, and have even successfully reconstructed a severely damaged skull. The possibilities aren't limited to our physical bodies, though. Chemist Lee Cronin believes that one day it will be possible for people to purchase chemical blueprints and ink and print their own medications at home!
According to one article, two-thirds of all top manufacturers use 3D printing in some of their processes. The majority of them use its capabilities to create prototypes of new products because it is faster and less costly. However, 10% of manufacturers have found ways to successfully incorporate it into the actual production process, and 3% reported that their products couldn't be made without 3D printing technology. Based on the current growth rate, the market, worth $2.5 billion in 2013, is expected to reach $15.2 billion by 2018.
Another article points out that 3D printers can use up to ten different materials simultaneously. The printer can scan the geometries of all the necessary components of a complex item and use that information to print other objects around them. Rather than shopping for the right size case to fit your expensive tablet, it's now possible to have a case printed directly onto it.
All modern water systems utilize valves, and researchers are working on using 3D technology to create new types of valves. Traditionally, precision valves that regulate the flow of not just water, but oil and other liquid substances, have been made through a careful process of first creating a pattern, or "cast" of wood or plastic. 3D printing allows unique valve designs to be created and cast more quickly and inexpensively. Surprisingly, it has also paved the way for the development of temperature sensitive smart valves.
A scientific paper outlines the details of a new ink that can print thermally actuating hydrogels to create a smart valve using a network of alginate and poly N-isopropyl acrylamide. Thermally actuating means that the ink interacts with the environment and responds differently to different temperatures. Experiments have shown that the gels increased in length by over 40% when exposed to heat and then cooled. Using this information, they developed a smart valve that reduces the flow of water by 99% through exposure to heat and increases it with exposure to cold.
Experts predict that 3D printing will make it possible to create fully functioning human organs within the next five years. This is wonderful news for the thousands of people on waiting lists for transplants and their loved ones. They also predict that 3D printers will one day become as popular as home computers, which could result in the same degree of rapid innovation as people transform their ideas into physical realities.
Since powerful new technology creates the potential for abuse of that power, experts also point out the necessity for regulation of the industry. For example, 3D printed firearms may one day be used to commit crimes. Other legal considerations include the effects of 3D printing on current copyright and intellectual property laws. The real challenge lies in achieving a balance between public safety and the rapid innovation that has produced inventions that, ten years ago, would have been considered miraculous. One thing is certain—3D printing will make the future more interesting.
Project management thoughts
In order to remain competitive, many small companies have had to upgrade their payroll systems to a localized self service model that is much less expensive than the now soon-to-be antiquated centralized payroll system that many enterprise level companies still use.
The innovation has yet to fully hit the business mainstream; however, it is more than accepted as legitimate by the companies that have the leverage to change on a dime.
In order to implement such a system without causing an operations bottleneck that will affect employees who are expecting a paycheck, a variety of project management skills must be implemented. Although the process is to decentralize payroll, the process itself must usually be centralized around a project manager with a certain skill set.
Of the companies on record that have successfully been able to make the switch, there are many technologies that are also in place before any big moves are made. Here are just a few of the ways in which a company can use its tech and human project management resources in tandem to decentralize its payroll system.
Finding a Good BPO Provider
The secret to success in a widespread endeavor such as overhauling payroll lies in properly outsourcing certain aspects of the procedure. A good business process outsourcing (BPO) choice is essential to minimizing the internal human resources that are consumed by the change.
When AstraZeneca chose to change its entire global payroll system, it chose Northgate Arinso because of the ability of the latter company to navigate the various cultural, political and technological challenges of the many countries through which AstraZeneca would be moving its HR functions.
This is not a direct endorsement for the services of Northgate; you may not need an internationally connected company to accomplish your payroll decentralization. However, the reasoning behind the partnership is worth noting for any situation.
AstraZeneca chose Northgate because of the potential for collaboration. The data migration resources that AstraZeneca brought to the table were leveraged by the Northgate IT team to reconnect branches of AstraZeneca that had not communicated with each other for years, automation having long made up for the time lag.
The collaboration between the two companies was able to re-energize the personal efforts of the entire AstraZeneca team without overworking any of the employees at any branch. Daily operations happened without a hitch while a minimal internal staff worked on the payroll changes, backed up by Northgate specialists.
One of the most important aspects of changing payroll on this level was the fact that it was driven from both the human resources and the finance department. One might think that this would cause an overload of opinion and potential conflicts of interest because of the sometimes opposing nature of these two branches of business.
Because of the personal “glue” that the Northgate specialists provided, however, the AstraZeneca team was not overwhelmed or pressured at any time. The delegation between the two departments became an asset rather than a power grab.
This had to do with the project management skills of a single individual with a penchant for delegation – Ana Calado. Ana spearheaded the effort from within AstraZeneca by attaching herself to the Northgate team en masse through specially appointed delegates.
She made sure that all of them were on the same page politically and operationally before deploying them with orders to lead the Northgate specialists in a consolidated data migration effort that would deploy a centralized system into branches across the world with the consistency of a McDonalds (not the consistency of the burger, the consistency of operations).
One of the technologies that Ana relied upon frequently was an automated payroll software solution that managed the tiered access structure of the AstraZeneca payroll logs. She stated that the process would have gone even more smoothly if she had had access to a newer technology (such as the Xero-integrated Deputy time tracker, which I have actually used on several occasions) that far outpaces the software she was using. The more up to date the access system, the less time is lost verifying the role that everyone is playing in the process.
This is especially important when you have two companies involved, one of which needs access to certain files and logs without being able to access other records. More time was spent making sure that Northgate employees stayed out of certain areas of the AstraZeneca paylogs than was spent giving them access to the proper channels.
Even with this setback, Ana put the wheels of collaborative project management in motion in a way that is not often seen in a single company, much less between two companies. She states in interviews that the reason that she was able to overcome technological shortcomings was because of the unity of purpose that she gave to all teams before sending them out to accomplish their mission in the best way they saw fit.
She gave them enough room to solve their own problems while the final goal was set by a centralized source, which is one of the finest examples of the use of human resources in the modern business world. Think of how easy it would be for you with the proper project management software taking her example as a lead. Get the right technology and give the right message to your team – success will follow soon after.
There is no doubt that technology has changed the way we do most of our day-to-day tasks, and learning has not been spared. In the effort to elevate the learning experience to even greater heights, we have encountered many new gadgets and software platforms boasting cutting-edge technologies and advanced tools that deliver excellent results. The outcome is a wealth of eLearning platforms that many people find genuinely interesting. Streamlining learning goals and objectives has become much easier, since course learning blocks we could only imagine a decade ago are now simple to obtain. Given that technology innovation never seems to take a break, 2016 has presented us with many eLearning technologies, and we find the following 5 picks to be really unique.
Cloud Platforms for Easy Access and Storage of ELearning Materials
Cloud platforms provide a perfect environment for any eLearning experience. Many people now prefer to move most of their digital data into the cloud, and usually for good reason. Cloud-based eLearning tools are highly favoured because they allow remote access to whatever data is of interest. They also make it possible to form highly effective collaborations with other members sharing the same eLearning platform, which adds to the general learning experience. Finally, many organizations now prefer to keep most of their digital content in cloud-based tools, a move that brings massive cost savings and a productivity boost when it comes to online training and learning.
Wearable Tech Gadgets
Wearable technologies have so far found most of their use in areas such as health and fitness monitoring and gaming. eLearning can also benefit greatly from the unique advantages wearable technologies offer. For instance, gadgets like smart watches allow quick online access to eLearning resources such as training modules, interactive modules or online scenarios, as dictated by the eLearning platform being used. Wearables also let eLearning resources travel wherever a person may be, so an individual can gain the relevant skills and training without being confined to a particular location. It is even possible to narrow an eLearning experience down to a specific geographical location, ensuring that those enrolled in a particular resource receive materials that are culturally and socially appropriate for that location.
Virtual Reality Headsets and Glasses
Virtual reality has been a hot topic of discussion, especially in the gaming industry. Many companies have given out test trials of virtual reality gadgets, and the immersion these gadgets provide has been likened to something out of a sci-fi movie. The concept has matured, however, and many people now see the opportunity to use it for applications like creating complex engineering designs or simulating real-life experiences that would be dangerous to test directly. VR can make eLearning even more interesting, as presentations built on this concept usually make learners far more appreciative of what is being delivered.
Automated Development Platforms
ELearning requires significant investment in terms of both time and money, especially if the learning objective is to include some element of interaction. In 2016 we have seen many automated development platforms appear, most of them offering pre-built templates, graphics and interactions that correspond to the learning topics of interest. This cuts down development time and creates an environment where the interaction required for eLearning is largely automated.
For best results, any learning experience has to be reinforced with elements like discussion. This is one area where training telepresence has been of great help: the technology provides something of a social gathering for members who share a similar eLearning platform. Learners from any part of the world can engage in online discussions and collaborate with one another without any geographical limitation. Training telepresence makes use of high-definition cameras, audio equipment and simulated space to make learners feel as if they are actually sharing a common physical space. Some researchers have suggested that eLearning can get even better if training telepresence is combined with elements of virtual reality. This will help create a highly immersive environment that makes online discussions worthwhile and adds more to the overall objective of any eLearning.
Learning Management System (LMS)
An LMS offers a unique advantage in that it is able to meet the eLearning needs of many government and enterprise organisations. A learning management system like LearnFlex is an excellent platform when it comes to flexibility, adaptability, scalability and overall effectiveness.
Installing the IBM DB2 database server in Linux is straightforward and relatively simple as it comes with a graphical installer image. The most important part of installing DB2 is preparing your system, which means installing hardware and software dependencies.
On Linux, even a headless server must have a graphical environment to run the DB2 setup wizard, so the X Window System and a basic window manager, such as Openbox, must be installed. After meeting the basic requirements, all you have to do is launch the installation wizard from the CD or ISO image.
Install the X Window System and a Window Manager
If you already have a Linux desktop environment, such as Gnome, Unity or KDE, you can skip this step and proceed to running the installer. If your server is only set up to run Apache from the command line, you must install Xorg and any other related packages from your package manager. Whether you use Yum, Apt or Pacman, simply enter the appropriate command to install X:
yum install xorg xorg-server xorg-utils
apt-get install xorg xorg-server xorg-utils
pacman -S xorg xorg-server xorg-utils
Refer to your distribution's official repositories for the exact package names required to run X on your system. You also need to install a video driver; it only needs to be a simple, lightweight, open-source driver if you're only going to use it to install DB2. If you have a Debian or Ubuntu server, you can install all the required packages, including Xorg and a video driver, by running the following command:
apt-get install lxde
This meta-package installs the essential packages needed to log into a graphical session and run the DB2 installer, and it should only take up 50MB to 60MB of hard-disk space.
Run the DB2 Installation Wizard
After rebooting your computer and logging into a graphical session, insert the DB2 installation disk and mount it in your user's media directory. For example, enter the following command at the Terminal prompt:
mount /dev/sr0 /run/media/username/DB2_INSTALLER
Alternatively, just open a file manager, such as Nautilus or PCManFM, and select the disc in the navigation sidebar. In a Terminal window, enter the following commands to unpack and run the installer:
gzip -d db2setup.tar.gz
tar xvf db2setup.tar
Launch the extracted db2setup script (typically sudo ./db2setup from the unpacked directory). The graphical installer opens, and you can install DB2 by selecting Install a Product and then choosing the products you want to install from the disc.
Recover DB2 Database Files
Once you have DB2 installed on your computer, you can remove the graphical packages if you don't want to use them. DB2 runs entirely from the command line, and you can use a few simple commands to perform maintenance operations, such as backing up and restoring database files. To restrict usage to the system administrator, use the following command:
db2 quiesce db database-name immediate force connections
Substitute the name of your database for database-name in the command. To back up a database, use the following simple command:
db2 backup db database-name
The database is saved in your current directory. Later, you can restore the database with the following command:
db2 restore db database-name
This command automatically chooses the most recent backup image. If you would rather restore an earlier backup, include a time stamp with the restore command, as in the following example:
db2 restore db database-name taken at timestamp
Next, roll forward the database state to the end of the most recent log file using the following command:
db2 rollforward db database-name to isotime using local time and stop
After issuing these commands, your DB2 server is ready to be used. It contains the restored database contents with the most up-to-date log file.
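Tying the maintenance commands together, a nightly backup job might look like the sketch below. The database name SAMPLE is a placeholder, the command forms simply mirror the article above (check the exact syntax against your DB2 version), and the helper only echoes what it would issue, so the script is safe to dry-run on a machine without DB2; swap the echo for a real invocation on an actual server.

```shell
#!/bin/sh
# Hypothetical DB2 maintenance sketch: quiesce, back up, then release the
# database. Dry-run by default -- it only prints the commands it would run.
DB=SAMPLE                             # assumed database name; substitute your own

run() { echo "would run: db2 $*"; }   # swap echo for a real call on a DB2 host

run quiesce db "$DB" immediate force connections   # admins only during backup
run backup db "$DB"                                # backup image lands in the current directory
run unquiesce db "$DB"                             # let regular users back in
```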
image credit: Printerzone
Depending on the kind of printer you have purchased, it is possible to get one that is Linux-ready out of the box and will thus just fit in with your operating system.
On the other hand it is also possible to get a printer which is not supported out of the box. In most instances, all you need is to install a driver for such a printer and voila! You are ready to print.
Because there are so many versions of Linux out there, covering all their printer configuration systems can be problematic.
To overcome this, there is a setup tool aptly called CUPS (Common UNIX Printing System) that offers a universal, web-based configuration interface on every distribution that uses CUPS for printing.
What exactly is CUPS?
CUPS is basically a modular printing system that acts as a print server for UNIX-like operating systems. It can do this for both networked machines and stand-alone computers. CUPS consists of the following three key systems:
A print scheduler/spooler, which lines up printing jobs for the printer;
A filter system, which converts the data into a format the printer can understand;
A back-end system that transports the data from the filters to the printer.
When CUPS is installed in the system it installs the following directories by default:
/var/spool/cups-pdf; this is the spooler directory where all the PDF files generated by CUPS are held for printing.
/var/spool/cups; this is another spooler directory where general print jobs are held before being printed.
/etc/cups; this is the configuration directory
In addition to the above, CUPS also installs its service in one of these two locations: /etc/rc.d/init.d OR /etc/init.d/ .
Depending on the location or distribution used, you start, stop or restart the service by calling that init script with the corresponding argument.
Remember to change the location according to how the binary is saved in your machine.
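The literal commands did not survive in this copy of the article. On a Debian-style layout they would most likely be the init script called with start, stop or restart; the sketch below only prints the command lines (drop the echo to actually run them), and both the path and the script name are assumptions (older Debian releases shipped the daemon as cupsys).

```shell
# Assumed init-script location; adjust to /etc/rc.d/init.d/cups (or 'cupsys'
# on older Debian releases) to match your own system.
CUPS_INIT=/etc/init.d/cups

echo "$CUPS_INIT start"     # start the CUPS scheduler
echo "$CUPS_INIT stop"      # stop it
echo "$CUPS_INIT restart"   # stop, then start it again
```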
How to Configure Your Printer
image credit: ESP
This configuration is done using the integrated CUPS web-based tool, and the walkthrough is for setting up a remote printer. This is because the process for a remote printer is slightly more complicated and thus offers a good opportunity to learn the installation and setup procedures. Also, for those not so adventurous, you can always hire a print management professional or company to do it for you if you're in a corporate environment.
The main intention of going through the setup process is to allow UNIX to create what is known as a PostScript Printer Description (PPD) file.
This file contains all the features of the printer in question, along with the PostScript code used to invoke those features for print jobs on that particular printer.
To configure the printer using the above-mentioned web-based CUPS tool, open your web browser and go to the main page of the CUPS tool at http://localhost:631
From here one should follow these steps to set up the printer:
Step 1 – Click The “Add Printer” Button
This button is on the main page. Next to it there is another named “Manage Printers“; this one comes in handy if you have more than one printer, as it allows you to manage all the printers which have already been installed.
Step 2 – Key In The Name, Location And Description Of The Printer You Are Setting Up
There are certain conditions you must fulfill on this page. When you are typing in the name of the printer, make sure it does not include a SPACE, “#” (hash) or “/” (forward slash). So the name should appear as one continuous word.
The location should just state where the printer is located such as Lab 2 or Lab 4. You can use any human readable characters.
The description should be a human readable description of your printer such as HP Laser jet 6781 and can include spaces.
Once you have filled in the three input boxes you should click “continue“.
Step 3 – Select The Device From The List
At this step, you are expected to select the URI of your device. In most instances it is either remote or local. If you are installing a local printer, it will be listed in the drop-down box, so all you need to do is select it.
If the printer is remote, select the Internet Printing Protocol as the URI of the device.
Click the “continue“ button to go to the next step.
Step 4 – Enter The URI Which Will Instruct All The Back-ends To The Exact Place Where The Printer Is Located
Because, as stated earlier, we are configuring a remote printer, you must enter the address of the printer. The address can take various formats, which are displayed in the window.
So if, for the purposes of this walkthrough, the printer is spooled under /printers/ and aptly called LaserJet, we will type in something like this: ipp://192.168.0.10/printers/LaserJet (the address here is just an example). It follows the format ipp://hostname/printers/queue-name.
If you are connecting to a printer server, you will have to know this information beforehand. Make sure you include the ipp:// section as failure to do so will make it impossible to connect your machine and the remote printer.
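To make the address format concrete, here is a tiny sketch that assembles a device URI from its parts. The host address and queue name are made-up example values, not something the article specifies:

```shell
# Compose an IPP device URI for a remote CUPS/IPP print queue.
HOST=192.168.0.10      # example print-server address
QUEUE=LaserJet         # example queue name configured on that server

URI="ipp://$HOST/printers/$QUEUE"
echo "$URI"            # prints ipp://192.168.0.10/printers/LaserJet
```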
Step 5 and 6 – Select The Printer’s Manufacturer And Model
When selecting your model, make sure you get the correct one as there may be different models for different languages. If you don’t find your model then you will have to install a driver for it.
You can use Google or any other search engine to help you find a compatible or proprietary driver that's suitable. To install the driver you have found, just go back to Synaptic, search for the name of the driver package and then install it.
Once you are through, click the “Add Printer” button to add your configured printer. In most instances you will be asked for your username and password before the installation completes.
Step 7 – Configure Any General Settings For The Printer
After authentication succeeds, a new page will appear. This page lets you configure any additional settings for your printer, such as what to do when it jams, the error and operation policies it should apply, and the power-save period, among other things.
Once you are through with all the above, you are ready to print from a remote printer.
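For completeness: everything the web interface does in steps 1–7 can also be done with the lpadmin command-line tool that ships with CUPS. The sketch below only assembles the command rather than running it; the queue name, URI, and the "everywhere" driverless model are illustrative choices, not requirements:

```python
import subprocess

def lpadmin_args(name, uri, location, description):
    """Assemble an lpadmin invocation mirroring the web-interface steps."""
    return [
        "lpadmin",
        "-p", name,          # queue name (no spaces)
        "-v", uri,           # device URI from step 4, e.g. ipp://host/printers/queue
        "-L", location,      # human-readable location (step 2)
        "-D", description,   # human-readable description (step 2)
        "-m", "everywhere",  # driverless IPP model; swap in a PPD if you need one
        "-E",                # placed after -p, this enables the queue
    ]

args = lpadmin_args("LaserJet", "ipp://192.168.0.100/printers/LaserJet",
                    "Lab 2", "HP LaserJet 6781")
# Uncomment to register the printer for real (needs CUPS and admin rights):
# subprocess.run(args, check=True)
```

Note that the position of -E matters: before -p it requests an encrypted connection, after -p it enables the queue and makes it accept jobs.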
While many people have started ditching landlines to switch to cellular phones, many houses and most businesses still use regular landline phones. There are several good reasons to have them around -- they tend to be more reliable, often offer better voice quality, and for emergency services a landline provides your location immediately and reliably.
For most people this means having a standalone phone around, which is usually cordless. Why not? It's more convenient than a corded phone that ties you down.
However, cordless phones use radio waves to transmit the signal, usually in the same 2.4 GHz band as Wi-Fi. This connection to the base station "over the air" means that, as with Wi-Fi, someone who's sitting outside your home or office could listen in on what you're saying.
Wi-Fi allows you to encrypt your signal so even if someone intercepts your signal they won't be able to understand it. Can you do the same with your cordless phone?
Yes and no. Cordless phones can be encrypted so most people will find it very hard to snoop on your conversation. However, you can't add it to your phone if it doesn't have this feature already -- you'll have to buy a new phone.
The reason is that older cordless phones transmit information between handset and base station using an analog signal. This means that any snooper with a radio receiver tuned to the right frequency who can get close enough to your property to receive the signal can listen in.
This is why cordless phones shifted to digital signals and "Digital Spread Spectrum" technology, which hops frequencies rapidly to make the signal hard to intercept. (The frequency-hopping technique behind DSS was co-invented by the actress Hedy Lamarr, together with composer George Antheil, in the middle of World War II.)
The newest and most secure phones are called DECT phones because they use an advanced form of this DSS, the Digital Enhanced Cordless Telecommunications standard, adding encryption and transmitting on 1.9 GHz instead of 2.4 GHz so the phone won't interfere with Wi-Fi or other cordless phones. They're often labeled as "Wi-Fi Friendly" for this reason.
If you're wondering what kind of phone yours is, check your manual. If there's no DECT or DSS mentioned anywhere, yours is an analog phone.
Having a DSS or DECT phone will make it much harder for someone to snoop on you... but of course "harder" is far from "impossible." Hackers have been able to crack DECT encryption for some time, but it requires advanced technical knowledge, high-end radio equipment, and specialized software -- most people aren't going to go to that kind of trouble.
Normally if someone is willing to spend that much effort to eavesdrop on you, it's because you're a highly important target -- a key person in a big company or a high-ranking government official. In that case, you'll probably be using other security measures.
If you have a DECT phone, in other words, you can be fairly confident none of your neighbors are eavesdropping. If you're still worried, you can just use a corded phone -- but keep in mind it's always possible to add a phone tap directly on your phone line.
Regardless of your company's size or the industry you operate in, you can't get around the fact that a robust communication and collaboration platform is needed. Many businesses use project management interfaces to complement basic messaging and calling apps (Skype obviously being the most popular). However, an alternative called Slack has managed to triple its user base within the past year, reaching an impressive 2.7 million daily active users (DAU) practically at the speed of light. Surely there must be some tangible advantages that are sparking such a rapid growth rate? If you're wondering how Slack is better than Skype, or vice versa, here's a brief comparison of the two:
Cost and Functionality
In terms of cost, Skype is hard to beat, being that it is always free to use as a messaging app and only charges for voice calling credits. Slack also has a free version, but it has some limitations feature-wise, whereas the free version of Skype is fully functional apart from calls to landlines and mobile numbers, which require credit. With that said, Slack can do a lot more than Skype altogether, which is probably why about 800,000 of its users have opted for the paid version. In terms of simplicity and learning curve, most people are more familiar with Skype's interface, but reading a simple guide on how to use Slack should bring anyone up to speed quickly.
Compatibility and Integration
Slack wins in this department hands down, with the ability to allow other software to post messages to its interface. You can even create custom integrations to link the software with any program you'd like. In terms of bringing everything together under one roof, Slack takes the cake, as it even integrates with Skype and Google Hangouts for voice calling functionality. Of course, this year Slack added built-in voice chat and video calls, so you no longer need Skype integration to make voice calls.
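To illustrate how simple such an integration can be, the sketch below builds the HTTP request for a Slack incoming webhook, the mechanism Slack provides for letting other software post messages into a channel. The webhook URL shown is a hypothetical placeholder; Slack issues a unique URL when you create the integration:

```python
import json
from urllib import request

# Hypothetical placeholder -- Slack generates a unique URL per integration.
WEBHOOK_URL = "https://hooks.slack.com/services/T000/B000/XXXX"

def build_notification(text):
    """Wrap a message in the JSON body Slack's incoming webhooks expect."""
    payload = json.dumps({"text": text}).encode("utf-8")
    return request.Request(WEBHOOK_URL, data=payload,
                           headers={"Content-Type": "application/json"})

req = build_notification("Build #42 passed")
# Uncomment to actually post the message (requires a real webhook URL):
# request.urlopen(req)
```

This is the pattern behind most of Slack's off-the-shelf integrations: any tool that can issue an HTTP POST can drop a message into your team's channel.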
Interface and Features
There's no question that Slack has a larger feature set than Skype – another reason why it is now being compared to such an established app. The bottom line is, you can do more with Slack than you can with Skype, as the latter is purely a chat/calling app, whereas Slack is a complete communication, storage, and collaboration solution that integrates with all of your major services like Google Drive, Dropbox, GitHub, Trello and more.
Ultimately, it all depends on what you're looking for and what your business needs to operate efficiently. If you're just looking for a basic messaging and calling app and have no need for additional functionality beyond that, you can always start with Skype as a preliminary solution for your business, and then as your needs expand try a more inclusive, overarching approach with Slack. Fortunately, making the transition from Skype to Slack is typically easy thanks to streamlined integration that basically lets you use Skype features within Slack. Which one do you prefer?