Recently Xiaomi, the much-hyped Chinese phone maker, held its first online sale of the new Xiaomi Mi3 smartphone and its smart MiTV, and both devices sold out in little more than 60 seconds.

Think Email Is Dead Outside Of Work?

A 2012 Harvard Business School study, “E-Mail: Not Dead, Evolving,” found “communication between individuals—the original intent of e-mail—isn’t even listed in the top five activities” of how we use email today.


I have worked in the world of technology since 1982 and even worked as the vice president of an email services company. I tend to lean pretty heavily on email in my work world, but I have noticed how its use is changing in my personal life.


My most technologically literate friend, Stephen, and I often communicate by Twitter. Some friends who used to send me emails now mostly communicate with me by comments on my Facebook feed. Some even presume that I might stoop to reading Facebook email, which I listed as one of the ten things the tech industry should fix.


I have found that my thirty-something friends and family prefer to text me on my smartphone. I am okay with that since I found the MightyText app that lets me send and receive text messages from my Google Chrome browser and on my tablet.


My personal reality seemed to be shaping up to a handful of people outside of work who still communicate with me by email. Even some of them are only responding to emails that I send. The golden age of personal email seemed to be receding into the mists of time.


It is different in the business world, where stats show that 48% of consumers prefer email as the way to communicate with brands. That explains why I have spent the last week trying to decide between Constant Contact and MailChimp as email marketing platforms.


Can Email Be The Great Equalizer?


About six months ago, two things happened to change the dynamic that emails are dying as a form of communication in my personal life. First, I got elected to the board of directors of our homeowners association (HOA). Second, our minister decided that communication between the committees led by the elders of the church would go electronic.


One of the reasons I got elected to the HOA board was the hope that I would create an online calendar and perhaps establish email communication between the board and homeowners. I did end up doing all of that but it turned out to be the easy part of the volunteer job.


At the church, I was already in charge of our website and the communications committee.


Together, these two events gave me a completely new perspective and perhaps a hope that email for communication between people outside of business still has some life even if it will not be as glamorous as the earlier days of email.


Most of us in the technology world work in environments where we share files on a regular basis. At WideOpen Networks, my day job, we use Skype, Dropbox, and Highrise to share a lot of Pages files. When I am writing an article for ReadWrite, I often write the article in Google Docs, and I can usually attach the file directly through Trello, the content management solution we use, or upload a rich text format (RTF) document to a Trello card.


In both work cases, I am dealing with folks who understand files and things like Dropbox, Box, Google Drive, and SkyDrive. If there is a problem, it usually can easily be solved by sending someone an RTF document.


Life is not nearly as simple when you start trying to share files with people of varying ages and technology skills.


The Challenges Of Email And File Sharing


When I started sending my files to other church elders, I thought the easiest and most foolproof thing would be to share a document and send the sharing notice with the content of the report pasted into the email. To be blunt, that was a disaster. Some complained that they could not even open my email. It left me wondering how that could be.


The mystery started to clear when it occurred to me that a lot of people have become occasional email users, and they are accessing their email on everything from a browser pointed at ISP-provided webmail to iPads and smartphones running a variety of email clients—some of which an email snob like me considers pretty shaky.


One of my preferred technologies is IMAP email, preferably IMAP on a server in the cloud that I manage or one that is managed by people who actually know what they are doing and are focused on getting my email from me to the people I want to contact. While I use Gmail (the IMAP version, of course) for personal email, it is not my choice for business email.
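

For readers wondering what client-based IMAP access actually looks like under the hood, here is a minimal sketch using Python's standard imaplib module. The host, account and password are placeholders rather than my real setup; the point is that IMAP reads mail in place on the server instead of dragging it down to a single machine.

    import imaplib

    # Connect to a hypothetical IMAP server over SSL and count unread mail.
    # Nothing is downloaded-and-deleted; the messages stay on the server,
    # which is what lets several devices see the same mailbox.
    HOST = "imap.example.com"      # placeholder server
    USER = "me@example.com"        # placeholder account
    PASSWORD = "app-password"      # placeholder credential

    with imaplib.IMAP4_SSL(HOST) as conn:
        conn.login(USER, PASSWORD)
        conn.select("INBOX", readonly=True)        # read-only: the mailbox is untouched
        status, data = conn.search(None, "UNSEEN")
        print(len(data[0].split()), "unread messages, all still on the server")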


I am not a big fan of webmail portals, which I consider at best a necessary evil for when a hotel’s Internet service blocks a port and makes it impossible to use client-based email.


When I started looking at the email providers used by some of the people with whom I was trying to communicate, I knew that attachments were likely going to be problems.


One Man's Battle With Attachments


Recently, I found out just how much of a problem attachments can be even in a very small group. At our most recent HOA board meeting, I ended up being the secretary when Anne, our very competent secretary, had to take one of her children to the doctor.


I managed to scribble down some notes and took Anne’s advice and typed them up that same evening while things were still fresh in my mind. I actually tried typing them up in Pages 5, since I was writing my Why Less Might Be More In Pages 5 article. I had some trouble getting the bullet numbering right so I moved it to Google Docs and actually sent her a Word docx file. There were a few details that needed to be added a little later before the minutes were finalized.


A few days later she sent out the completed minutes. I had no trouble viewing the file she sent, but I did notice that somehow the file extension had been stripped. I added a .RTF extension to it and opened the file in Word, but strangely it would not open in Nisus Writer Express or Pages 5. I chalked that up to stuff that just happens in the computer world.


We were already having more than a little trouble getting everyone’s approval on the emailed minutes attachment before they could be printed and mailed out. When we did not hear from the other two board members regarding the attachment, I sent an email to Anne and said that since she was out of town I would print the minutes and take them to the other board members. I did that at noon the next day.


At the first board member’s house, I was told they had two computers and one computer seemed to be eating all the emails before the other one could read them. Following my rule of never getting involved in solving a technology problem unless the person is a blood relative, I did not bring up the subject that their email was likely POP and the first computer was likely removing the email from the server. I handed them the printed copy and just made sure the board member was happy with it.
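

For anyone wondering how one computer can "eat" another's email, here is a rough sketch of what a typical POP client does, using Python's standard poplib module. The host and credentials are placeholders; the important part is the delete step, which removes each message from the server so a second computer polling the same account never sees it.

    import poplib

    conn = poplib.POP3_SSL("pop.example.com")    # placeholder server
    conn.user("board-member@example.com")        # placeholder account
    conn.pass_("app-password")                   # placeholder credential

    num_messages = len(conn.list()[1])
    for i in range(1, num_messages + 1):
        message_lines = conn.retr(i)[1]          # download the message
        conn.dele(i)                             # mark it for deletion on the server
    conn.quit()                                  # QUIT commits the deletions: gone for everyone else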


At the second and last house, the wife of the HOA’s president took the printed copy and said she would deliver them to her husband when they met for lunch later that afternoon.


I did not think anything more of our problems until the president of the HOA showed up at my door that same Saturday afternoon. While he had gotten the printed copy of the file that I delivered, he wanted to know why he could not open the attachment sent by our secretary. He had tried unsuccessfully on his Android tablet and Android smartphone.


It took me a minute to remember the missing file extension on the attachment and a lot longer to find a free app, OfficeSuite, to install on his smartphone. Just to be perverse, before I forwarded him the file again, I added a .docx extension to the original file the secretary had sent. I tested the file in Mobile Office 365 on my smartphone before opening it without any problem on his smartphone using OfficeSuite. He left happy that he could read the minutes. I did not spoil the good feeling by telling him a program could easily strip the extension again the next time the minutes are sent.


Lessons Learned


All of this is far more complex than it needs to be. Holding classes on how to collaborate with others using electronic devices is beyond what I want to tackle in an area that I love but which gets most of its time-sensitive communications from hand-lettered bed sheets on posts at the intersection of the main highways instead of through Twitter.


It turns out that email is the solution. You just have to keep it very simple. If you have to share something with people with whom you do not work, do not do attachments. Just copy the text of your report and paste it as plain text into an email.


Do not even dream of trying to get a diverse group of people using Google Drive or Dropbox; just be smart and revert to the simplest email that you can use. Follow my recommendations, use plain text email, and cross your fingers. At our church, which is a larger group, I just quit doing reports. It makes life a lot easier.


Image courtesy of Shutterstock.






from ReadWrite http://readwrite.com/2013/11/29/think-email-is-dead-outside-of-work


Tablets: The Tech Guru Gift Guide


ReadWrite Shop is an occasional series about the intersection of technology and commerce.


Brace yourself: Tech enthusiasts across the land are about to get swamped by friends and family members begging for help on gadget gifts. Yes, that means you.


In the past, choosing a tablet often hinged on tech specs and app selection. But the gap in hardware performance has narrowed quite a bit, and app stores for Apple and Android devices are much more alike than they used to be. (Most of the top iPad apps are available on Android at this point.) So how to choose?


To steer your relatives through these shoals, turn to our handy tablet gift-guide flowchart and revel in the admiration and awe you'll receive in return.





Lead image via Flickr user ebayink, CC 2.0






from ReadWrite http://readwrite.com/2013/11/28/tablet-buying-guide


Terremark Gets Surgically Removed From HealthCare.gov

The flailing convulsions that make up the launch and subsequent recovery of the Affordable Care Act's HealthCare.gov website are still managing to inflict damage on vendors who had a hand in setting it up in the first place.


Next up: Verizon Terremark, which was the web-hosting provider for the online marketplace, has been given the boot by the Department of Health and Human Services. HHS opted not to renew its contract with Terremark, and instead awarded the winning bid to Hewlett-Packard, the Wall Street Journal reported.


HP's Enterprise Services group put in the winning $38 million bid to start taking over the web hosting duties this summer.



Anyone following the HealthCare.gov debacle will not be terribly surprised by Terremark's fallen status. Health and Human Services Secretary Kathleen Sebelius threw Terremark under the bus in her Oct. 29 testimony to Congress when asked about a recent failure on the site. Sebelius pinned the blame squarely on Terremark.


A couple of things leap out at me about this move. The first is quite cynical: I hope HP can survive its encounter with HealthCare.gov. The second is more of an observation on mixing technology with politics: it rarely works.


The issue here is that in their quest to figure out what went wrong with HealthCare.gov, politicians are really seeking to figure out who to blame—and ensure that blame does not fall on them.


Verizon Terremark may have indeed dropped the ball on HealthCare.gov, but they are hardly the only ones. Pushing them out now may be necessary—none of us are fully privy to the mess that's been made—but it seems wildly counterproductive to lose one web-hosting provider and force a transition to another one at a time when many other things need to be fixed.


It's like asking a patient to be moved to another hospital, while he's in the middle of open-heart surgery.


Image courtesy of Shutterstock.






from ReadWrite http://readwrite.com/2013/11/28/terremark-removed-from-healthcaregov


Fiber-Optic Networks May Be NSA's Back Door Into Secure Data Centers

Data centers are regarded as the Fort Knoxes of the digital age: heavily guarded and impregnable to any intruder that tries to get their hands on the data within.


So how does a government agency like the NSA manage to get hold of data from the likes of Google and Yahoo? Easy. Instead of taking the gold from Fort Knox, the intelligence agency may be hijacking the data on the road, an Internet highwayman that preys on the one vulnerability every data center has: data has to go somewhere eventually.


The "road," in this case, are the fiber-optic cables that comprise the backbone of the Internet. According to the New York Times, Google and Yahoo are increasingly suspicious that Level 3 Communications, which provides the Internet cables for the two Internet service vendors, is allowing the NSA to grab data in transit between data centers.



...[O]n Level 3’s fiber-optic cables that connected those massive computer farms—information was unencrypted and an easier target for government intercept efforts, according to three people with knowledge of Google’s and Yahoo’s systems who spoke on the condition of anonymity.



Level 3 isn't the only company that runs fiber-optic cables: companies like BT Group, Verizon Communications and Vodafone Group are in this category as well. It is not known for sure whether any of these companies are actually providing access to intra-data center communications, but given the NSA's use of secret warrants with attached gag orders to subpoena data directly from data center providers, it does not seem all that far of a leap to think that the agency is doing the same with the network vendors.


The lesson here for all of us who use the Internet? If the government really wants your data, it's going to get it, one way or another.


Image courtesy of Shutterstock.






from ReadWrite http://readwrite.com/2013/11/26/fiber-optic-networks-nsa-back-door-secure-data-centers


Apple Busts A Move With PrimeSense Acquisition

Israeli start-up PrimeSense is the latest acquisition for Apple, which has picked up the 3D sensor technology vendor for $350 million.


PrimeSense's technology is one of the key elements of the sensing tech used within Microsoft's Xbox consoles. But it is doubtful that this is a move to try and hobble Microsoft's gaming efforts. Microsoft no doubt has an iron-clad licensing agreement for PrimeSense's contributions to the Kinect system, and a lot of Kinect comes from efforts directly within Microsoft Research.


The acquisition is much more about Apple, which could incorporate the capability to sense body movements into devices like the iPad and Macbook, and perhaps the Bigfoot-like iTV we keep hearing about in "exclusive leaks" to the media.


Control of devices by movement would be a cool addition to the Apple lineup, and we're looking forward to seeing it soon.






from ReadWrite http://readwrite.com/2013/11/25/apple-primesense-acquisition


Glass Explorer Contest Winner Takes Aim At Alzheimer's

As Google Glass 2 makes its way to the members of the Explorer program, ReadWrite is happy knowing that one of the newest explorers is a ReadWrite community member.


Earlier this month, ReadWrite held a contest that awarded an invitation to the Glass Explorer program, based on the entry that would have the best potential to do the most social good.


Of all of the qualifying entries, the one that impressed our panel of judges the most came from Nasr Mobin of San Diego, CA.


Mobin's entry was simple and profound. He wants to create an app



...[F]or people who are experiencing Alzheimer's. Not only it will help them with memories they have forgotten (people's face and names and properties, work they have done in a day and other days in past, etc.) that don't have a solid memory [of], it also could provide brain practices at different times through out the day automatically (recommended by their doctor). Eventually it could become their personal memory assistant.



Mobin's idea has merit; a memory assistance device could be an invaluable aid to those suffering from Alzheimer's, and their families.


Mobin has reported that he has already received his invitation and placed the order for his own Glass device. We are looking forward to hearing from Mobin and finding out how his project is progressing.






from ReadWrite http://readwrite.com/2013/11/22/glass-explorer-contest-winner-takes-aim-at-alzheimers


Google Patents Tech That Could Take The 'Social' Out Of Social Networking




Google has patented plans for software that learns how you behave on social networks and can automatically generate suggestions for "personalized" reactions to tweets and Facebook posts.


As the BBC first noted, the ostensible goal of the software is to help users keep up with and reply to all the interactions they receive, especially critical ones. However, technology like this could be counterproductive; the whole point of social media is to, well, be social, after all.






from ReadWrite http://readwrite.com/2013/11/22/google-social-robot


Windows Phone Users Can Finally Experience The Joys Of Instagram




Today Instagram is finally available for Windows Phone, with just one tiny flaw: you can't actually capture videos in the app, and capturing a photo is complicated.


Though initial reports said that users couldn't capture photos with the Instagram app, our own Dan Rowinski reports that it works just fine; it just takes you to the camera roll first. The app brings users to the external Windows Phone native camera as opposed to an Instagram camera, but most people won't notice, as it brings you straight back to the app with the photo you have taken ready to crop and add filters to. Video capture isn't currently available.


The Instagram app available through the Windows Phone Marketplace claims to be a beta of the app so it is likely that the Facebook-owned photo-sharing platform will issue updates to the app soon.


Instagram wanted to release an app as quickly as possible, so it focused on Instagram's core features and will continue to develop the product to bring additional features in the future.


"Most people upload photos from their camera roll, so with the beta version of the Windows Phone, we're starting with the experience most people already use," a spokesperson for Instagram said.


Instagram had been reluctant to build an app for Windows Phone, but Nokia announced last month that an Instagram app was in fact in the works. Nokia's Lumia 1020 is arguably the best smartphone camera on the market, so it makes sense that the company would want the premier photo sharing service on its phones.


It seems as though Instagram for Windows Phone is optimized specifically for the high-quality images the smartphone cameras produce, or so the manager of the Windows Phone team Joe Belfiore tweeted earlier today.









from ReadWrite http://readwrite.com/2013/11/20/instagram-introduces-app-for-windows-phone


Enterprise Data Needs Still High On The Pain List

A lot of people look askance at the idea of big data, wondering if it is more hype than substance. But consider this: Microsoft's most successful corporate acquisition to date is a company that does nothing but manage the huge growth of data for customers.


The data pressure, whether you buy into the hype or not, is most definitely on for companies, and StorSimple is one of the vendors seeking to alleviate that pressure.


This week marks one year since the crew in Redmond formally acquired StorSimple, and given the phenomenal success the acquisition has had, Microsoft is taking a little time to celebrate.


StorSimple, which was founded in 2010, is a hardware storage gateway vendor that specializes in tiered storage. Tiered storage is a way to organize storage by location or by how urgently the data is needed. One common example of tiered-storage use is to store lesser-used archival "cold" data in a public cloud, while keeping more-used "hot" data stored locally for faster access.


Because the tiers are transparent to end users, the distribution of data is seamless. If the resources on the public cloud are large enough or use good, elastic policies to expand on demand, then users essentially get "a bottomless file server," according to Microsoft Corporate Vice President Brad Anderson.
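

To make the hot/cold idea concrete, here is a toy sketch of a tiering policy in Python. It is not StorSimple's actual logic; the directories and the 30-day threshold are assumptions, and a real gateway would push the cold tier to a public cloud rather than to another folder.

    import os
    import shutil
    import time

    HOT_TIER = "/srv/storage/hot"     # fast local storage (placeholder path)
    COLD_TIER = "/srv/storage/cold"   # stand-in for the cheap cloud tier
    COLD_AFTER_DAYS = 30              # illustrative threshold

    def tier_cold_files():
        cutoff = time.time() - COLD_AFTER_DAYS * 86400
        for name in os.listdir(HOT_TIER):
            path = os.path.join(HOT_TIER, name)
            # Files that have not been read recently move to the cold tier.
            if os.path.isfile(path) and os.path.getatime(path) < cutoff:
                shutil.move(path, os.path.join(COLD_TIER, name))

    if __name__ == "__main__":
        tier_cold_files()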


Anderson, who oversees Microsoft's Windows Server and System Center products, isn't what you would call subdued about StorSimple's success to date.


"In the six months prior to the Microsoft acquisition, versus the six months post-acquisition, StorSimple did seven times the business," Anderson related in a recent interview.


Or put another way: in those six months after the acquisition, StorSimple was hitting numbers it wasn't expecting to hit until three years after it was bought.


Data Is Life


Numbers aren't usually a focus of the story, but they are important to mention in this instance, because they are a solid piece of evidence that suggests there is more to this data explosion than mere hype.


Anderson said that many of their customers are seeing data growth rates of 40-50% per year, and that storage is the fastest growing line item for data center budgets. That feels pretty market-y, but it does match the ongoing conversations people have been having in the enterprise about data management and storage.


It is easy to see "data" and start thinking "big data"—with all the attendant analytics, application development and magic pixie-dust that big data hype will usually bring. But even regular data needs—the kind that just needs to be stored for business, backup or disaster recovery reasons—must be managed.


And, ideally, without breaking the budget. Anderson highlighted the City of Palo Alto, Calif., which jumped to StorSimple after reviewing Storage Area Network options that would have run the city $250,000. The bill for StorSimple was closer to $60,000.


Cost is a big driver for StorSimple customers, and so is flexibility. The seamless approach described earlier is a boon not only to end users but to their IT managers as well. It also gets them exposure to using a public cloud-based platform.


That doesn't bother Anderson: 50% of new StorSimple customers are using Windows Azure for the first time. Data storage in the cloud could be the gateway drug for enterprises and small- to medium-sized businesses to more cloud computing use later.


As entire ecosystems continue to grow around big data and data analytics, companies are still very much seeking the less-flashy, but still critically important, tools to manage everyday data. Businesses still have work to do, and now more than ever, even "ordinary" data is a company's lifeblood.


Image courtesy of Shutterstock.






from ReadWrite http://readwrite.com/2013/11/20/enterprise-data-needs-still-high-on-the-pain-list


Google And Microsoft Put Differences Aside To Fight Child Porn

Google Chairman Eric Schmidt is still assuring users and politicians in the UK that Google is working hard to combat the problem of child pornography—and he's also giving credit where credit is due to Microsoft.


In a Daily Mail article, Schmidt outlined the steps Google is taking to rid its search engine results of exploitive images of children. These steps include deterrence, by showing warnings, crafted by Google and charities, that will pop up anytime someone enters a search term seeking such images.



Microsoft and Google are teaming up on the detection and removal steps. Once illicit images are correctly identified, they will be digitally tagged with a unique fingerprint.


"This enables our computers to identify those pictures whenever they appear on our systems. And Microsoft deserves a lot of credit for developing and sharing its picture detection technology," Schmidt wrote.


The third step in Google's plan is providing technical support to organizations such as Internet Watch Foundation in the U.K. and the U.S. National Center for Missing and Exploited Children.


All of these steps are positive moves forward in the fight against child pornography, though they are very much geared to what Google and Microsoft can actually do: get the illicit images off search networks, which is the first step towards eliminating them altogether.






from ReadWrite http://readwrite.com/2013/11/19/google-and-microsoft-put-differences-aside-to-fight-child-porn


Raspberry Pi Vaults Past 2 Million Sold Mark

It's a computer, but there's no monitor. Or fan, or keyboard, or even a case, for that matter. But the credit-card-sized Raspberry Pi is still getting snapped up by consumers: less than two years after the first Pis shipped, over two million have been sold.


Raspberry Pi falls into a category of computing device known as a miniboard, where the bare components of a computer (processor, video interface, USB ports and memory) are lashed together on what amounts to a circuit board.


But from such a simple device, many things can be created. By plugging in external storage, a monitor, and a keyboard, users can have a Linux computer running in minutes. Or build sophisticated electronic devices like a media streamer or an Internet radio.
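

The canonical first project gives a flavor of that tinkering: blink an LED from a few lines of Python. This sketch assumes an LED and resistor wired to GPIO pin 18 and uses the RPi.GPIO library that ships with Raspbian, so it will only run on an actual Pi.

    import time
    import RPi.GPIO as GPIO

    LED_PIN = 18                      # assumes an LED (with resistor) on GPIO 18

    GPIO.setmode(GPIO.BCM)            # use Broadcom pin numbering
    GPIO.setup(LED_PIN, GPIO.OUT)

    try:
        while True:
            GPIO.output(LED_PIN, GPIO.HIGH)   # LED on
            time.sleep(0.5)
            GPIO.output(LED_PIN, GPIO.LOW)    # LED off
            time.sleep(0.5)
    finally:
        GPIO.cleanup()                # release the pins on exit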


The flexibility of Raspberry Pi is certainly an attractive feature. So, too, is the price. The two models of the Raspberry Pi cost $25 for the Model A and $35 for the Model B. Both models feature a 700-MHz ARM processor on a Broadcom system-on-a-chip board, with 256 MB of RAM and an SD/MMC/SDIO card slot for onboard storage. The big difference between the two models is that the extra $10 will get you a 10/100 Ethernet port and a second USB port in the Model B.


Bringing Code To The Masses


Two million devices sold is quite an achievement for a project that has its roots in trying to decrease computer illiteracy.


In 2006, team members in the University of Cambridge's Computer Laboratory in the United Kingdom noticed a sharp decline in computer skills in A Level students accepted into their program. Worse, it was a trend they could see being repeated in other nations besides the UK.


Despite the proliferation of personal computers, or perhaps because of it, kids were no longer playing around or experimenting with PCs. Instead, they were using apps as they were presented, or just buying and downloading new ones to do what they wanted. Hacking and coding, it seemed, was going out of style.


The Cambridge team, led by designer Eben Upton, began to put together a small, portable, and very inexpensive device that would boot right into a programming environment. From there, a student of any age could start coding to their heart's content.


By 2008, the device now known as the Raspberry Pi had completed the design phase and was ready for production. The Raspberry Pi Foundation was founded that year, and after three years of fundraising and production, the Pi devices were rolling off of the assembly line in February 2012.


The team is stunned by the project's success, even as they work on improvements to the popular miniboard device.



We never thought we’d be where we are today when we started this journey: it’s down to you, our amazing community, and we’re very, very lucky to have you. Thanks!



Image courtesy of Wikimedia.






from ReadWrite http://readwrite.com/2013/11/19/raspberry-pi-vaults-past-2-million-sold-mark


Watch Where You Write

When you put time and creativity into sharing your thoughts online with an audience, do you care about fully owning where that experience takes place?


Increasingly, writers are turning to third party platforms to host their works instead of maintaining their own blogs. They're finding readers at places like Medium, Svbtle, Twitter, and Facebook. I'm one of them, but lately I've been wavering over where I want to put my thoughts down on digital paper.


The platforms mentioned above have built remarkable communities, but they are also double-edged swords. On one hand they can bring a massive audience to an unknown writer, and they're also much easier to maintain from a technical perspective. On the flip side, whoever is packaging your material for you can flip a switch at any time and change the context of your creative output and how your readers access it.


A few factors to consider when choosing to house your creative endeavors on a platform you don't own:



  1. You can only implement technology that the platform lets you. While it may be constantly evolving, from a technical perspective you don’t have full control over how you share media or which technical integrations you use.

  2. How you make money (and how much) is largely determined by rules you have no part in making. While those rules treat everyone (all contributors) equally, not everyone is an equal in terms of the talent or audience size they bring to the table.

  3. If a platform disappears tomorrow, or does something that you don’t like, it isn’t easy to migrate yourself and your fans away. It’s not your platform, you’re just using it. This happened to me on Posterous. More on that below.


Now obviously many platforms provide a great service to content creators by offering free hosting for content, and that’s huge, but they do it at a cost. Typically that cost is using your audience to advertise to. Maybe it's today, maybe it's in the future.


Social networks and blog collectives serve a very important purpose, but they should not be seen as a canvas an artist feels required to paint on. Centralizing content in one place makes it easy for audiences to discover you, but it also turns you into just another shop in a mall, competing for attention along with the smell from Cinnabon and the guy selling bedazzled phone cases.


So why have I been drawn to specific platforms?


Medium


I was one of the first few hundred users of Medium through a bit of luck. I loved the traffic spikes that came with getting a story posted to the homepage. I also believe the CMS (content management system) is the most beautiful minimalist writing prompt I've ever used.


I've had a chance to interact with lots of people from the bottom to the top of the organization and they sought my advice and implemented some of it. I love what they're doing and if I'm going to cheer on any third party platform it's them. That said, although I briefly considered making Medium my primary venue for writing, I quickly switched gears because I just don't have enough control over how the site presents my work.


Svbtle


Much like Medium, Svbtle built a userbase by curating excellent writing from a notable group of people who generally have large social networks. I really enjoyed the content, but was quickly turned off by a lack of communication between the team running the site and the community they were trying to build.


There was a strange inhuman quality to Svbtle, and I don't feel like that's changed much since I first encountered it. Ultimately, if there's a race to see which new blogging platform "wins," I'm not planning on betting on the Svbtle horse. From my perspective it's just a simple platform that offers little more than a clean design for people who don't want to manage their own website.


Posterous


I had always hosted my own sites, but one day Posterous showed up and offered simple posting (via email) and a powerful community. In many ways it was a precursor to Medium and Svbtle. It was so unique that it actually spurred a friend and me to start a project called "the3six5," a public diary that let a different person write an entry every single day for an entire year.


That project ran for 1000 days in a row and we accumulated 365,000 words from people all over the world. And then Posterous was shut down. While the data was saved, the Internet shrine we had built was essentially bulldozed. The trust we had put in a third party platform burned me pretty bad here.


Twitter


As of writing this post, it's my six year anniversary on Twitter. I've tweeted 65,000 times averaging about 30 tweets per day. Besides the painfully depressing realization that I could have done so much more with my life than this, it's clear that Twitter has successfully convinced me to trust a third party platform with my words.


Perhaps the brevity that comes with 140 characters makes tweets seem less significant or upsetting to lose, but as of now, I've spent more time writing on Twitter than any digital venue I've ever touched. I will continue to trust Twitter with my work, but only because I can download it at any time.


In the last few years writers have definitely started migrating away from their own domains, but will simpler content management systems and an increased competency in web development swing things the other way?


Where do you prefer to write? Where do platforms like Tumblr and Squarespace fit?






from ReadWrite http://readwrite.com/2013/11/13/own-your-own-words


Google Glass Adds Music To Its Immersive Services

Surround sound used to be a marketing term for clear, rich music and entertainment delivery. But Google Glass is implementing new music search features that will deliver not just sound that surrounds, but sound that integrates more completely into users' lives.


The new Glass music search service will feature music matching by listening to ambient sound, and music playback ... all controlled by voice. To avoid playback from bothering people nearby, Glass will also have earbuds through which to listen.






from ReadWrite http://readwrite.com/2013/11/12/google-glass-adds-music-to-its-immersive-services


Post Office Starts Sunday Delivery Service For Amazon

Even as the United States Postal Service wants to scale back on its normal delivery schedule, the mail service is picking up a little extra Sunday business on the side.


The USPS announced a new pilot program for the New York and Los Angeles metro areas that will deliver packages for Amazon.com on Sundays, starting immediately.


Sunday delivery for Amazon will bring the Postal Service a much-needed boost to its revenue, as well as benefit Amazon, which will get reliable and relatively inexpensive shipping service on the one day of the week mail carriers aren't pushing through rain, sleet or dark of night.


Though the expanded service will only be available in the NY and LA areas to start, it is expected that the Sunday delivery service will expand next year to other locations in the U.S., such as Dallas, Houston, New Orleans and Phoenix.


Image courtesy Flickr/Akira Ohgaki via CC.






from ReadWrite http://readwrite.com/2013/11/11/sunday-delivery-service-starts-amazon-usps


State of the OS: Three Operating Systems, Three Upgrades

As a veteran of many operating system upgrades, I am usually somewhat cautious when it comes to system upgrades, but keeping my data in the cloud has perhaps made living on the wild side a little less dangerous.


I have two desktop computers, a Core i5 Mac Mini and a Lenovo tower also powered by a Core i5 processor. In addition to OS X, the Mac Mini also runs Xubuntu Linux through VMware Fusion.


During the last ten days of October 2013, I did major upgrades on all three of my operating systems. Over the years I have seen lots of strange things happen when doing a single operating system upgrade. I once did a Mac OS X upgrade and it took me a week to get my email to work again. I have done early Linux upgrades and had applications break beyond my ability to fix them. Linux upgrades caused me so many problems that I gave up on the operating system until I discovered Ubuntu.


I don’t have as much experience upgrading Windows systems since I typically have gotten my new operating systems by purchasing a new computer and passing on my old Windows machines to someone else. Still, I lived through the many upgrades to Vista, so I saw networking on my laptop break more than once.


Doing three major upgrades very close together is obviously inviting trouble. However, it is also a good way to measure if we are making any progress in the operating system world.


Enter The Penguin


For years, I lived by the mantra of a “clean install” when upgrading my Macs. This time I decided to go for broke and make the first upgrade on my virtual Linux system, pushing my Xubuntu install up to Saucy Salamander—aka Ubuntu 13.10—the underpinning of Xubuntu Linux.


To be very honest, my Linux upgrade happened behind the scenes with no intervention from me other than typing my administrative password and rebooting Linux. I am sure the Linux folks added a lot to the latest version and I have read the notes, but so far my undiscerning Linux eye hasn’t found anything which looks new. I mostly use the Firefox browser and Thunderbird email client on Linux. They both seem to work the same as they did before. LibreOffice has a few new features, including the ability to embed fonts in documents when sending them to someone else. It is a credit to the Linux folks that upgrading is now so painless. I am happy with my Linux world.


The Race Is On


My experiences upgrading to Windows 8.1 and OS X Mavericks were more interesting.


I had been forewarned that the download for Mavericks could be slow, so I started the download before I went to bed. The next morning when I came upstairs to my office, I found that I had a successful download and OS X Mavericks was ready to be installed. Being a little old school, I used DiskMaker X to make a bootable OS X Mavericks installer on an empty USB drive so I would not have to go through the download again in case there was a need.


Also, to make things more interesting, I queued up the Windows 8.1 download so I could start it at the same time OS X Mavericks started installing. Not that it really matters much, but it turned out Windows 8.1 downloaded and installed quicker than OS X Mavericks finished installing.


Surprisingly, the upgrades went smoothly for both the Lenovo desktop and the Mac Mini. Of course we all know that the fun begins once you start trying to do the same things that were once easily accomplished using your old operating system.


Checking Out Mavericks


One of my least favorite parts of operating system upgrades is having to buy upgraded applications that are broken by the operating system upgrade. Usually there is at least one, and it was not surprising that VMware, my virtual machine client, was the one that broke. Which meant that things were starting not to work well in Linux, through no fault of Xubuntu's.


I checked and found that there was a “new and improved” VMware version that was designed to work with OS X Mavericks. I paid the $49.95 upgrade fee, downloaded and installed the new version of VMware, and my Xubuntu experience was back to normal.


I did the OS X Mavericks upgrade hoping that the new OS would fix a printing problem that developed with Mountain Lion. I have three printers on a network and one of them was showing as available on the Mac, but when I tried to print to it, it would never connect. The same printer worked fine from my Windows computer on the same network. I tried reinstalling it a couple of times but I never could get it to work.


I was pleasantly surprised when I tried to print to the printer under Mavericks and it actually worked. Unfortunately a couple of days later, it quit working so I finally gave up and hooked it to the Mac using USB while having the Windows machines access the printer through Ethernet.


So far I have only had one crash on my Mac running Mavericks. It was the old version of Pages and it has not happened again.


Windows On The World


How did the Windows system upgrade fare? Actually, things seemed to go very well until I tried to upload some photos using the built-in SD slot on my Lenovo tower. The SD slot did not work.


I rebooted and it worked, but the next time I tried it, it would not work again. I plugged in an external SD reader and it seems to work fine. It is actually a little easier to reach that than the slot in the tower so I may just ignore the problem.


I did have another somewhat scary problem after I upgraded to Windows 8.1: when I tried to wake the system from sleep the next day, I got the message that my system was broken and needed to be taken to a dealer. I rebooted, the message went away, and so far the problem has not reappeared. My fingers remain crossed.


So far on Windows, all my applications are working and Windows 8.1 is still the same multi-personality OS that it was before. I use Start8, so I mostly ignore the new Windows 8 interface on my desktop machine. I do use the touch features on my Lenovo Yoga which I have not upgraded to Windows 8.1.


The Biggest Changes


Of all the changes in the three operating systems, the one that tried to change the way I work the most was the new default way that second screens are used on the Mac. My Mac desktop has two screens and after a few days I decided the new setting which gives each screen its own Space just would not work for me. I found the solution buried deep in the Mission Control preferences. There is a check box that lets me change back to the old way where a single Mac window could stretch across two screens.


I haven’t used the new iWork suite extensively, but with no support for linked text boxes, it is definitely not the same Pages. I am most impressed with iWorks in the Cloud. It seems to be a nice balance of speed and functionality. I even got it to work from a browser in Linux. I have tried opening a couple of RTF format documents with Word and iWorks. iWorks looks like it might be speedier. I have been told the formats of the new version of iWorks are not backward compatible with old versions but you can export the new versions to the old format.


All in all, congratulations should go out to the folks who have brought us these modern operating systems. My triple roll of the upgrade dice was definitely made on a hot table.






from ReadWrite http://readwrite.com/2013/11/11/state-of-the-os-three-operating-systems-three-upgrades


Stability In An Uncertain World: Adding A Nine To Your Cloud Platform Availability


This guest post is from David Thompson, principal DevOps engineer at MuleSoft.



Nothing lasts forever. This is certainly true for infrastructure, and it's most poignantly obvious in the public cloud, where instances churn constantly. Your single-node MySQL service? Not long for this world, man. That god-like admin server where all your cron jobs and 'special tools' (hacks) live? It’s doomed, buddy, and it will take your whole application stack with it if you’re not careful.


One question that came up recently within the DevOps team here was: “Given the AWS EC2 service level agreement (SLA) of 99.95%, how do we maintain an uptime of 99.99% for our customer applications?” It’s an interesting point, so let’s explore a few of the principles that we’ve learned from building a platform as a service to maintain higher availability than our IaaS provider.


Consider a simple-but-typical case, where you have three service components, each one having a 100% dependency on the next, so that it can’t run without it. It might look something like this:



You can calculate the expected availability of this system pretty easily, by taking the product of their individual availabilities. For instance, if each component is individually expected to hit three nines, then the expectation for the system is (.999 * .999 * .999) = .997, failing to meet a three-nine SLA.
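

The arithmetic is easy to check yourself; a few lines of Python reproduce the figure above.

    # Availability of a chain of hard dependencies is the product of the parts.
    components = [0.999, 0.999, 0.999]

    availability = 1.0
    for a in components:
        availability *= a

    print(availability)   # 0.997002999: roughly 99.7%, short of a three-nine SLA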


Redundancy and Clustering: Never Run One Of Anything


In order to break into the high-availability space, it’s critical to run production services in a redundant configuration; generally, you should aim for at least n+1 redundancy, where n is the number of nodes needed to handle peak load for the service. This is a simplistic heuristic, though, and in reality your ‘+1’ should be based on factors like the size of your cluster, load and usage patterns, and the time it takes to spin up new instances. Not allowing enough slack can lead to a cascade failure, where the load spike from one failure causes another, and so on until the service is completely inoperable.
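

Assuming node failures are independent, the payoff of that ‘+1’ is easy to quantify: the cluster is up as long as at least n of its m nodes are up. The numbers below are illustrative, not our production figures.

    from math import comb

    def cluster_availability(node_availability, n_needed, m_total):
        # Probability that at least n_needed of m_total independent nodes are up.
        a = node_availability
        return sum(
            comb(m_total, k) * a ** k * (1 - a) ** (m_total - k)
            for k in range(n_needed, m_total + 1)
        )

    print(cluster_availability(0.999, 2, 2))   # no spare:   ~0.998
    print(cluster_availability(0.999, 2, 3))   # n+1 spare:  ~0.999997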



We typically run all of our edge (i.e., externally facing) services as stateless Web apps behind Elastic Load Balancers. This allows a lot of flexibility with regards to replacement of instances, deployment of hot fixes, and the other kinds of maintenance tasks that can get you into serious trouble when you’re running a SaaS solution. The edge services are backed by a combination of persistence solutions, including Amazon RDS and MongoDB, each of which provides its own redundancy, replication and failover strategy. Instances for both API and persistence services are distributed across multiple EC2 Availability Zones (AZ), to help prevent a single AZ failure from taking out an entire service.


Loose Coupling And Service-Oriented Architecture


If you decouple the services so that each one is able to function without the others, your expected availability improves, but it also becomes a lot more complicated to calculate because you need to consider what a partial failure means in terms of your SLA. An architecture like this will probably look a little messier:



The diagram above shows a typical case where you have several different services, all running autonomously but consuming each other's APIs. Each of these blocks represents a load-balanced cluster of nodes, with the request-blocking calls in red and the asynchronous processing in black.


One example of a service-oriented architecture (SOA) that might be structured like this is an eCommerce system, where the synchronous request consumes billing and inventory services, and the asynchronous processing is handling fulfillment and notifications. By adding the queue in the middle, you can decouple the critical calls for the purchase process; this means that S2 and S4 can have an interruption, and the customer experiences no degradation of service.
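

A toy version of that decoupling, with Python's standard in-process queue standing in for a real message broker, shows why an outage in the asynchronous half never blocks the customer. All of the names here are made up for illustration.

    import queue
    import threading
    import time

    orders = queue.Queue()                 # stand-in for a durable message queue

    def purchase(order_id):
        # Synchronous path: call billing and inventory here, then hand off the rest.
        orders.put(order_id)
        return "order accepted"

    def fulfillment_worker():
        while True:
            order_id = orders.get()
            time.sleep(0.1)                # stand-in for shipping and notification work
            orders.task_done()

    threading.Thread(target=fulfillment_worker, daemon=True).start()
    print(purchase("order-42"))            # returns immediately; the slow work happens later
    orders.join()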


Since we’re running a platform as a service (PaaS), we have different SLA requirements for customer apps versus our platform API services. Where possible, we run customer apps semi-autonomously, maintaining a loose dependency between them and the platform services so that if there is a platform service outage, it doesn’t impact the more stringent SLA for customer applications.


TDI: Test Driven Infrastructure


Monitoring is really just testing for infrastructure, and like with application code, thinking about testing from the beginning pays huge dividends in systems architecture. There are typically three major categories of monitoring required for each service architecture: infrastructure, application stack and availability. Each one serves its own purpose, and together they provide good visibility into the current and historical behavior of the services and their components.


For our public cloud infrastructure, we’re using a combination of Zabbix and Pingdom to satisfy these different monitoring needs. Both are configured to trigger alerts using PagerDuty, a SaaS alerting service that handles on-call schedules, contact information and escalation plans.


Zabbix is a flexible, open source monitoring platform for operating system and network level metrics. It operates on a push basis, streaming metrics to collectors that aggregate them and provide storage, visualization and alerting. Also—and critically in a public cloud environment—Zabbix supports automatic host registration so that a new node can register with the aggregator with no manual intervention.


Pingdom looks at services from the opposite perspective, i.e., as a list of black boxes that it checks periodically for certain behaviors. If you have carefully defined your SLA in terms of your APIs and their behaviors, then you can create a set of Pingdom checks that will tell you factually whether your service is meeting its SLA, and even create reports based on the historical trends.



A PaaS also needs another layer of monitoring: internal platform monitoring. The platform checks the health of each running customer app on a periodic basis, and uses the AWS API to replace it automatically if something goes wrong. This makes it so that there is a minimal interruption of service even in the case of a catastrophic failure, because once the app stops responding it is soon restarted. Internal health checks like this are application specific and require a significant technical investment, but provide the best auto-healing and recovery capabilities because the application has the most context regarding expected behavior.
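

Stripped of the platform details, an internal health check amounts to a loop like the one below. The endpoint, the polling interval and the replace_instance() hook are all assumptions for illustration; the real platform drives replacements through the AWS API.

    import time
    import urllib.request

    APPS = {"customer-app-1": "http://app1.internal.example.com/health"}   # placeholder

    def is_healthy(url, timeout=5):
        try:
            with urllib.request.urlopen(url, timeout=timeout) as resp:
                return resp.status == 200
        except OSError:
            return False

    def replace_instance(app_name):
        print("replacing unhealthy instance for", app_name)   # would call the AWS API here

    while True:
        for app, url in APPS.items():
            if not is_healthy(url):
                replace_instance(app)
        time.sleep(30)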


Configuration As Code


It’s also critical to know what your hosts are running at all times, and to be able to spin up new ones or update existing ones at a moment’s notice. This is where configuration management comes in. Configuration management lets you treat configuration as code, committed to GitHub and managed just like any other repo.


For configuration management, the DevOps team at MuleSoft uses SaltStack, a lightweight remote execution tool and file server written in Python and based on ZeroMQ that provides configuration management as an intrinsic feature. Combined with AWS' CloudFormation service for infrastructure provisioning, this creates a potent tool set that can spin up, configure and run entire platform environments in minutes. SaltStack also provides an excellent remote execution capacity, handy under normal circumstances, but critically valuable when trying to make a sweeping production modification to recover from a degradation of service.


As an aside, the combination of IPython, boto and the Salt Python module provides an amazing interactive CLI for managing an entire AWS account from top to bottom. More about that in a future article.
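

As a small taste of that interactive workflow, here is roughly what listing a region's EC2 instances looks like with the classic boto library. The region is an assumption, and credentials are expected to come from boto's usual configuration.

    import boto.ec2

    conn = boto.ec2.connect_to_region("us-east-1")   # assumed region
    for reservation in conn.get_all_instances():
        for instance in reservation.instances:
            print(instance.id, instance.state, instance.tags.get("Name", ""))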


Low-Risk Deployment


It’s probably painfully obvious to anyone in the software industry, and especially to anyone in DevOps, that the biggest risks and the biggest rewards are always packaged together, and they always come with change. In order to maintain a high rate of change and high availability at the same time, it’s critical to have tools that protect you from the negative consequences of a botched operation. For instance, continuous integration helps to ensure that each build produces a functional, deployable artifact, multiple environments provide arenas for comprehensive testing, and red/black deployment takes most of the sting out of a failed deployment by allowing fast failure and rollback.


We use all of these strategies to deploy and maintain our cloud infrastructure, but the most critical is probably the automated red/black deployment behavior incorporated into the PaaS customer app deployment logic, which deploys a new customer app to the environment, and only load balances over and shuts down the old application if the new one passes a health check. When DevOps needs to migrate customer apps off of failing infrastructure or out of a degraded AZ, we leverage the same functionality to seamlessly redeploy it to a healthy container.
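

In outline, the red/black logic is a short function. Everything below is a stand-in stub rather than our actual deployment code, but it captures the rule that the old version is retired only after the new one passes its health check.

    def start_instances(app, version):
        return [f"{app}-{version}-node1", f"{app}-{version}-node2"]

    def health_check(instances):
        return True                        # in reality: hit each node's health endpoint

    def swing_load_balancer(app, instances):
        print(app, "load balancer now points at", instances)

    def stop_instances(instances):
        print("stopping", instances)

    def deploy_red_black(app, old_instances, new_version):
        new_instances = start_instances(app, new_version)
        if health_check(new_instances):
            swing_load_balancer(app, new_instances)
            stop_instances(old_instances)  # retire the old version only after cut-over
            return "deployed"
        stop_instances(new_instances)      # failed check: fast, safe rollback
        return "rolled back"

    print(deploy_red_black("customer-app", ["customer-app-v1-node1"], "v2"))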


Availability For All


There really is no black magic required in order to set up a redundant, resilient and highly available architecture in AWS (or your public cloud provider of choice). As you can see from our setup, the accessibility of advanced IaaS platform services and high quality open source and SaaS tools allows any organization to create an automated and dependable tool chain that can manage entire infrastructure ecosystems in a reliable fashion.






from ReadWrite http://readwrite.com/2013/11/08/stability-in-an-uncertain-world-adding-a-nine-to-your-cloud-platform-availability


Microsoft Narrowing Its List Of CEO Candidates

Reuters is reporting that the candidate list of replacements for Chief Executive Steve Ballmer at Microsoft has supposedly been narrowed to five people.


Citing sources in the know, the report says the short list includes Ford CEO Alan Mulally as an external candidate. The internal candidates are Stephen Elop, the former Nokia CEO and now vice president of Microsoft's Devices & Services business unit; Tony Bates, the former Skype CEO and current executive VP of Microsoft's Business Development and Evangelism group; and Satya Nadella, executive VP of Microsoft's Cloud and Enterprise group.



Mulally, for his part, has brushed aside speculation that he intends to leave Ford and join the Microsoft team. But that doesn't mean he won't pack up and move from Michigan to Washington if the price is right.



Elop seems to be the inside favorite for the CEO gig, since he has prior Microsoft experience, and the CEO of Nokia line on his resume isn't hurting his chances, either.


Image courtesy of Reuters/Thomas Peter.






from ReadWrite http://readwrite.com/2013/11/06/microsoft-narrowing-its-list-of-ceo-candidates


IBM Tries To Put Twitter In Patent Cage

This may have been the day that IBM actually started to look desperate.


In an update to its S-1 filing prior to its initial public offering some time this week, Twitter somewhat casually revealed that IBM has notified the social media company that it is infringing on three of IBM's patents.


The nugget of information was buried deep in the S-1 form's Risk Factors section, one of the required tools to give investors a realistic look at the potential value of Twitter as a company.



From time to time we receive claims from third parties which allege that we have infringed upon their intellectual property rights. In this regard, we recently received a letter from International Business Machines Corporation, or IBM, alleging that we infringe on at least three U.S. patents held by IBM, and inviting us to negotiate a business resolution of the allegations. The three patents specifically identified by IBM in the letter were U.S. Patent No. 6,957,224: Efficient retrieval of uniform resource locators, U.S. Patent No. 7,072,849: Method for presenting advertising in an interactive service and U.S. Patent No. 7,099,862: Programmatic discovery of common contacts.




It seems clear that, for now, the patent discussion is just that—a discussion, not a full-blown lawsuit. The invitation from IBM to negotiate a business resolution is a clear sign that IBM just wants some licensing fees for these patents. How much, of course, remains to be seen. But the mention within Twitter's S-1 would seem to indicate that this communication has some significance.



Twitter is holding its cards close here, taking a strong but neutral stance on what it wants to do.



Based upon our preliminary review of these patents, we believe we have meritorious defenses to IBM’s allegations, although there can be no assurance that we will be successful in defending against these allegations or reaching a business resolution that is satisfactory to us.



Slipping mention of IBM's letter into its S-1 filing was a bit cheeky on the part of Twitter. Normally these kinds of patent discussions are kept under wraps with non-disclosure agreements and other legal tools that keep any parties involved from letting the outside world know what's going on. In 2011, for instance, Microsoft wouldn't even tell Barnes and Noble what patents the bookseller was supposedly violating until it signed an NDA.


That IBM apparently did not ask for similar gag restrictions of Twitter is telling, since they apparently want to keep things above board. But why approach Twitter in the first place? IBM dwarfs Twitter in size and revenue, and even if Twitter is violating Big Blue's patents, it would seem like small potatoes for IBM to stick their hands into Twitter's business.


See also: Oracle Claims Second-Largest Software Company Title


But IBM has seen better days, and it is not hard to make the leap that the company is seeking alternative forms of software licensing revenue, particularly after Oracle's recent spin that IBM has been knocked from the number-two software company spot. Upping its patent licensing operations would be one way of doing that.


This would seem to be desperation on the part of IBM, but for all we know, it could be business as usual. Again, many times these kinds of patent licensing deals are handled behind closed doors with little fanfare. Unless you're Microsoft, trying to endlessly score points against Linux- and Android-based products.


It's an interesting glimpse into the vast scope of patent litigation, though, a semi-shadowy world where millions of dollars can change hands with just a conversation between lawyers.


Image courtesy of Shutterstock.






from ReadWrite http://readwrite.com/2013/11/05/ibm-tries-to-put-twitter-in-patent-cage


ReadWriteContest: Google Glass Explorer Invitation Contest

Last Monday, when Taylor Hatmaker arrived at the ReadWrite offices for our staff retreat, everyone was very happy to see her. Some of us had never met her in person, so it was truly a historic day.


But we would be lying if we said we didn't have ulterior motives in finally meeting our very cool colleague ... word had been out all morning that current Google Glass Explorers were being given three invitations each to let more people into the program. Truly there was sucking up to be done.


But our leader Owen Thomas had a better idea: Hatmaker could donate one of her invitations to ReadWrite and let us find a suitable candidate for it. She agreed, and here we are.


Starting today, ReadWrite readers can submit their best idea for a Google Glass app or service. Our criteria are simple: the idea should effect positive social change and make use of the best features of Glass. Post a proposal of 100 words or less about your idea, with the #rwglass hashtag, on this Google+ thread by Thursday, November 8 at noon Pacific Standard Time. Our panel will judge the best entry and notify the winner that they have received an invitation to the Google Glass Explorer program.


There are some additional rules. According to the Glass Explorer program, all Glass Explorers must:



  • Be a U.S. resident

  • Be 18 years or older

  • Purchase Glass (currently US$1,500)

  • Provide a U.S.-based shipping address OR pick up the device in New York, San Francisco or Los Angeles.


Be sure you meet these criteria, especially the part where you will be financially responsible for buying Google Glass. If you do, feel free to enter the contest and tell us how you can make the world a better place with your Glassware app.






from ReadWrite http://readwrite.com/2013/11/04/readwritecontest-google-glass-explorer-invitation-contest


Why Penetration Testing and Vulnerability Assessment Are Important

Vulnerability assessment is a process run to detect, identify, and classify security loopholes in computers, websites, networks, information technology systems, and communication systems. A minor loophole in your network can put your entire system at risk and expose all of your information. Loopholes allow third parties to gain access, illicitly steal data, and exploit the databases and information across your whole network. Vulnerability assessment is a largely passive process that relies on software tools for analysis.
Penetration testing, however, is an active process and requires ethical hackers with profound knowledge of networking and hacking. A major difference between script kiddies and ethical hackers is that script kiddies misuse information and databases for personal gain, whereas ethical hackers run tests to find the loopholes and close them. In penetration testing, a security team is hired. The members of this team are highly skilled, experienced, and trustworthy; many are certified ethical hackers. They ensure the integrity of the network and are trained to use the same methods that criminal hackers use to gain unauthorized access to a system. The experts then make the company aware of its weaknesses and of what can be done to keep intruders out and the data private. Several ethical hacking institutes recruit experienced and skilled testers who can protect your network from a security breach.
Hiring a certified ethical hacker can protect and defend your network and computers from external attacks. The magnitude of the damage done to your business and network systems depends entirely on the attackers. If a vulnerability is serious, attackers can cause major damage to the site. Gaining access to internal, confidential databases can take a website down and publicly deface the company. To get into a network, attackers plant Trojan horses, viruses, or worms, which can slow down your network or even shut down your website. That is a potential loss for business owners, employees, clients, and customers alike.
Penetration testing is therefore essential in every respect; it is an investment, not an expense. Attackers look for loopholes in networks in order to steal a company's data, and fraudulent credit card purchases billed to customers' accounts are a common result. Penetration testing helps protect your network from such breaches, and the resulting reports detail the vulnerabilities found during testing. A vulnerability scanner, used alongside it, can successfully recognize known vulnerabilities on both Linux and Windows systems.
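To make the "software tools" point concrete, here is a minimal Python sketch of the simplest kind of automated check a vulnerability scanner performs: probing a host for open TCP ports. The target address and port list are placeholders, and a real scanner goes much further (service fingerprinting, matching versions against known CVEs); only ever scan systems you are authorized to test.

import socket

def scan(host, ports):
    # Attempt a TCP connection to each port; a return code of 0 from
    # connect_ex means the port accepted the connection (i.e. it is open).
    open_ports = []
    for port in ports:
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
            s.settimeout(0.5)
            if s.connect_ex((host, port)) == 0:
                open_ports.append(port)
    return open_ports

if __name__ == "__main__":
    # 192.0.2.10 is a documentation address; replace with a host you own.
    print(scan("192.0.2.10", [21, 22, 23, 80, 443, 3306, 3389]))

An unexpected open port (say, a forgotten database listener) is exactly the kind of loophole a full assessment would then probe more deeply.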
Kylie Taylor is a regular web article writer who provides reliable information on ethical hacking institutes and ethical hacking courses. You can get a complete guide to studying ethical hacking, and details of courses, from the Indian School of Ethical Hacking.

Augmented Reality - Useful Information To Know

Basically, augmented reality can be considered a modern type of virtual reality. When the physical aspects of the real world are simulated with computer-generated sight, sound, and touch to create a 3D setting, it is known as a virtual world. Imagine you are in a store and see products through a 3D system on a computer screen, where you can point at any product and rotate it to any angle. It is a marvelous experience indeed. This is the latest technology ruling the marketing world, and for the past four years it has gone by the name augmented reality. Today, AR is not limited to being a promotional tool; it is now used to create brands and build customer relationships. Almost every company has started using AR as a major tool for introducing products and services to the market and for creating its own brand image.
Owing to the lack of academic literature and research studies in the area of augmented reality marketing, this article begins by drawing on the small body of research papers on experiential marketing that forms the basis of this discussion. Economists hypothesize that the modern world is strongly tied to the "experience economy", meaning that customers are increasingly inclined towards experiential consumption. In this type of behavior, customers often treat functional utility as a secondary concern. This is where experiential marketing comes into effect: it treats consumption as a holistic experience and recognizes both its rational and emotional drivers.
The significance of experiential marketing lies in building value for end consumers, which in turn gives companies an added advantage, particularly in the future. It also encourages consumers to make quicker and more confident purchase decisions. Nevertheless, even though this new advertising orientation is broadly agreed to represent the future of marketing, it is still not completely understood, and for this reason a more varied range of research techniques is needed to understand consumers better.
The creation of experiential value, meaning the consumer's view of a product or service formed through direct or indirect observation, has recently been examined in two quantitative studies. Both focused on US brands and their consumers' viewpoints in the Taiwanese market, and both demonstrated that the experiential value construct can drive consumer satisfaction. Nevertheless, further studies are required to reproduce these outcomes in other cultures and to look more closely into the links revealed by qualitative research. The link between consumer satisfaction and value is further endorsed by findings which suggest that experiential advertising should deliver functional value, emotional value, and positive consumer satisfaction.
Despite clear and broad agreement on the direct link between consumer satisfaction and value, there is no consensus on the elements that make up the consumer's perceived value. What is clear is that satisfied consumers will usually purchase the product again and stay away from rivals in the market. Consumer satisfaction is mostly viewed from two different perspectives, cumulative and transaction-specific: whereas the cumulative aspect of satisfaction can only be assessed after purchase, perceived value arises at different stages of the purchasing process, including the pre-purchase stage.
Augmented reality is a modernized version of VR, or virtual reality. Its applications offer several benefits, some of which are:
• Though the novelty of these applications diminishes with heavy use, for now the onus is on brands, which can take advantage of the technology to create more products with ease. Augmented reality is increasingly popular with companies, and every new creation brings the possibility of further advancement and exposure.
• With augmented reality, users can upload their own media, such as images and other creative work. Companies can offer their own piece of innovation to their users, choosing the images and videos that add the x-factor.
• Companies can improve their creations and share them with their networks, and can even include attractive content.
• Many users cannot produce professional videos on their own; with augmented reality apps they can reach much further.
• When it comes to quality of content and other aspects, these apps are highly satisfying, but you need to try them to appreciate their usability and what they can produce.
Augmented reality experiential marketing primarily affects the pre-purchase stage, the step in the purchasing decision-making process where the customer is weighing the options before making a final choice. Customers can browse a wide variety of products before purchasing, including those that are out of stock, select from large ranges, and always have advice, recommendations, and inspiration available.

PCI DSS Version 3.0: New Standard But Same Problems?

"Cardholder data continues to be a target for criminals. Lack of education and awareness around payment security and poor implementation and maintenance of the PCI Standards leads to many of the security breaches happening today" PCI SSC 'PCI DSS 3.0 Change Highlights' - August 2013
Card data theft is still happening so the third revision of the PCI Data Security Standard is as much a re-launch as a revamp.
Many organizations - even Level 1 Merchants - have yet to fully implement all requirements of the PCI DSS V2 or previous versions of the standard, so eyes may well be rolling at a new version of a standard which hasn't yet been mastered in its previous forms.
This new version is more about refinement and clarification than any introduction of new techniques or technologies to help protect against card data theft, but while losses through card fraud are still on the increase, it is clear that something has to change.
How large is the problem?
In terms of the losses being experienced, you can see why card brands, issuers, and banks would still be desperate for better care and attention to be applied to their card numbers. $11 billion was lost to card fraud last year, and that amount is increasing every year. Bearing in mind that the total value of card payment transactions now exceeds $21 trillion annually, there is still plenty of money being made from providing fast, guaranteed payment products, but any initiative that reduces that $11 billion loss is worth some time and attention. From the most recent Nilson Report on card fraud:
"Card issuer losses occur mainly at the point of sale from counterfeit cards. Issuers bear the fraud loss if they give merchants authorization to accept the payment. Merchant and acquirer losses occur mainly on card-not-present (CNP) transactions on the Web, at a call center, or through mail order"
PCI compliance isn't just a card-brand problem that your organization has to spend time and money on; it is a way to protect your organization directly from serious risk. And the risk isn't only financial: brand reputation and customer trust are also lost when a breach occurs.
PCI DSS Version 3.0 - Stick or Twist?
The new version of the PCI DSS isn't available until early next month, so this is an early look at what is quite an extensive reworking of the standard. Most of the requirements carry over with some tweaks and additions, which we cover below, and there is also a degree of refinement in the wording throughout.
The overall intention is that the standard should promote thinking about the security of cardholder data rather than simply driving compliance with the standard. The Security Standards Council is, of course, keen that security best practices are adopted and practiced as a matter of routine rather than as a 'once-a-year, big-push-to-keep-an-auditor-happy' event - as if anyone would do that.
New items will be considered "best practices" until June 2015, after which they become official requirements. Furthermore, any organization compliant with PCI DSS 2.0 can stick with that version until January 2015 before adopting the new DSS.
What Has Changed in PCI DSS V3?
So what are the specific changes or new requirements? There are wording changes throughout to encourage more routine focus on the PCI DSS requirements, but there are some detail changes and clarifying language that we can highlight here.
Requirement 2: Vulnerability Management and Hardening
Requirement 2 has always mandated hardening server, EPOS, and network device configurations, removing default settings as a minimum and encouraging the adoption of a NIST or CIS hardening checklist. Detail changes for Version 3 make pass phrases acceptable: pass phrases are a good alternative to long, complex passwords, being easier to manage and remember while offering equivalent protection. Hardening, vulnerability management, and configuration control are one of the NNT 'strong hands', and more detail is available on our website.
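As an illustration of what auditing against a hardening checklist can look like in practice, here is a minimal Python sketch that compares a handful of SSH daemon settings against expected values. The file path and the three settings are examples chosen for illustration only; they are not an NNT feature and nowhere near a complete CIS or NIST benchmark.

# Expected values for a few illustrative sshd_config hardening items.
EXPECTED = {
    "PermitRootLogin": "no",
    "PasswordAuthentication": "no",
    "X11Forwarding": "no",
}

def audit_sshd_config(path="/etc/ssh/sshd_config"):
    found = {}
    with open(path) as f:
        for line in f:
            line = line.strip()
            if not line or line.startswith("#"):
                continue                      # skip blanks and comments
            parts = line.split(None, 1)
            if len(parts) == 2:
                found[parts[0]] = parts[1]
    # Each failure is (setting, expected value, value actually found).
    return [(k, v, found.get(k)) for k, v in EXPECTED.items() if found.get(k) != v]

if __name__ == "__main__":
    for setting, expected, actual in audit_sshd_config():
        print(f"FAIL {setting}: expected {expected!r}, found {actual!r}")

The same pattern - expected state versus actual state, checked automatically and repeatedly - is what a full hardening and configuration-control program applies across every setting on every device in scope.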
Requirement 6: Develop Secure Applications
6.5.6 - Insecure Handling of PAN and SAD in Memory
Just like with buffer overflow protection and SQL injection mitigation, this is an appeal for application designers to be on their guard. This requirement is aimed specifically at defending against memory-scraping malware, and at designing in safety features so that CHD (cardholder data) and Secure Authentication Data are protected.
The call is to take a step back and consider using programmatic features that prevent unauthorized applications from accessing memory (some development environments are better than others for this). What happens to CHD or SAD during a program crash? (Many attacks take the form of disrupting the application in order to make it 'cough up' or dump data.) Where possible, can the application completely erase data when it is no longer needed?
In other words, this is partly an application development challenge (hence its place under Requirement 6) but also a malware protection issue. An attacker needs a Trojan or other malware to scrape memory, so low-level FIM can play a part in underwriting code-level protection. In summary, get ready for some more challenging questions from your QSA, and ask your EPoS/eCommerce application providers or in-house development team now what they make of this requirement. It may also prove a difficult requirement for a QSA to validate.
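By way of illustration only, the following Python sketch shows the principle of overwriting cardholder data in memory as soon as it has been used. The function names are hypothetical, and a high-level language can only approximate this (the runtime may hold other copies of the string), which is exactly why 6.5.6 is first and foremost a design-time question for your developers.

def charge_card(pan_bytes: bytes) -> None:
    """Stand-in for a real payment-gateway call (hypothetical)."""
    pass

def process_payment(pan: str) -> None:
    buffer = bytearray(pan, "utf-8")   # mutable copy we can overwrite in place
    try:
        charge_card(bytes(buffer))
    finally:
        # Overwrite the PAN before the buffer is released, so memory-scraping
        # malware or a crash dump is more likely to find zeros than card data.
        for i in range(len(buffer)):
            buffer[i] = 0

In lower-level languages the same idea applies (explicitly zeroing buffers and keeping their lifetime short), and some environments offer secure-string or protected-memory features that do this more reliably.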
6.5.11 - Broken Authentication and Session Management
The detail of this new requirement appears to be asking merchants to mitigate the risk involved with client-side takeovers: assume that trusted clients could become attack vectors. Client-side attacks are one of the most common ways hackers get access to data and as ever, hackers will go for the weakest link. The requirement also intends to put focus on man-in-the-middle style attacks as well.
Interestingly there is also a suggestion that merchants who use re-directed services (like Worldpay for example) may also need to examine their application session management operation for vulnerabilities.
Primarily this is an application design issue (the Requirement 6 prefix is a giveaway). It highlights a common security-versus-functionality trade-off that developers tolerate because tighter controls can compromise the user experience: it will not improve sales on a retail website if a customer who momentarily leaves their shopping cart pre-checkout returns to a "session timeout" message. The OWASP knowledge base is your go-to resource for development mitigations.
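To show the kind of mitigation this requirement is driving at, here is a small, framework-agnostic Python sketch of two common session-management defenses: expiring idle sessions and issuing a fresh session ID at login so that a pre-login ID cannot be replayed (session fixation). The in-memory store and the timeout value are illustrative, not a prescription.

import secrets, time

SESSIONS = {}            # session_id -> {"user": ..., "last_seen": ...}
IDLE_TIMEOUT = 15 * 60   # seconds of inactivity before a session expires

def new_session(user=None):
    sid = secrets.token_urlsafe(32)          # long, unpredictable session ID
    SESSIONS[sid] = {"user": user, "last_seen": time.time()}
    return sid

def get_session(sid):
    entry = SESSIONS.get(sid)
    if entry is None:
        return None
    if time.time() - entry["last_seen"] > IDLE_TIMEOUT:
        del SESSIONS[sid]                    # expire idle sessions
        return None
    entry["last_seen"] = time.time()
    return entry

def login(old_sid, user):
    # Rotate the session ID on authentication so any ID issued before login
    # (and possibly known to an attacker) becomes worthless.
    SESSIONS.pop(old_sid, None)
    return new_session(user)

The balance the requirement asks for is in values like IDLE_TIMEOUT: long enough not to punish a browsing customer, short enough that an abandoned session is not an open door.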
Requirement 8: Always Use Unique User IDs
8.5.1 - Unique Authentication Credentials for Service Providers
Standard security best practice, within and outside of the PCI DSS, is to always use unique access credentials for EVERYTHING, so you know who the perpetrator is when something untoward takes place. It's just standard, good practice.
However, the need for this to be explicitly highlighted as a requirement suggests that service providers need a reminder that it applies to them too. Most service providers operate securely, but they still need to take the same basic precautions and ensure they are using unique credentials (and not just 'customername+administrator' as a username, either!).
Requirement 9: Physical Security
9.9 - Protection of Point-of-Sale (POS) Devices from Tampering
Based on cardholder data theft statistics, card skimming and its more elaborate variants targeting POS equipment are still widespread. This is the yin to the yang of the previously covered, highly technical requirements, reminding merchants that 'low tech' crime still works too.
Requirement 9 has always been intended to convey the message of 'don't let anyone touch any of the cardholder data processing equipment'. The Version 3 clarification here explicitly highlights protection of endpoints, leading to the conclusion that Requirement 9 has generally been interpreted as - rightly - being strongly oriented towards the 'central site' data center, but at the expense of focus on POS systems.
Requirement 11: Test Security
11.3 Develop and Implement a Methodology for Penetration Testing
This is another 'new' requirement that exists to emphasize one of the standard practices that everyone already complies with, but maybe doesn't do as well as they might. A classic case of meeting the letter, but not the spirit, of the requirement.
It appears that the market for Pen Testing has become highly commoditized with most vendors offering cost-engineered, highly-automated services. This inevitably has led to tests becoming more superficial (more 'checkbox approach to compliance') so this new requirement is a 'tug on the leash', forcing the merchant to avoid bad habits and corner-cutting.
This is central to the NNT methodology anyway, in that we advocate operating classic security best practices routinely, which in turn helps to minimize the 'boom and bust' approach to vulnerability management that the PCI DSS sometimes engenders.
For example, running annual or quarterly scans and then having to drop everything for a week to patch and re-configure devices, only to repeat the process three months later, not only makes life hard but may also leave you insecure for months at a time. NNT works on a continuous basis, continually tracking changes to devices and letting you operate more of a 'trimming' approach to vulnerability management. This approach is more effective, gentler on the network and hosts, and easier on your resources too.
Requirement 12: Maintain a Security Policy
12.9 - Additional Requirement for Service Providers on Data Security
And finally, a clarification of Requirement 12 concerning the use of cloud or managed security services. The intention is to ensure that service providers properly understand and fully meet their PCI obligations. The DSS places the onus on the merchant to obtain a statement acknowledging this, so that responsibility for protecting cardholder data is explicitly accepted by the service provider.
Conclusion
In summary, while there are new requirements, some of which may prove to be challenging to implement and test, nothing changes in terms of intent.
Data security has to be a full-time focus, requiring high levels of operational discipline, with checks and balances to ensure security is being maintained. The PCI DSS attempts to convey this, but has always fallen victim to the need to educate, clarify, and mandate security best practices. Data security isn't an easy thing to describe or summarize, which is why the DSS has ended up with 650 sub-requirements that merchants and payment processors find complex and ambiguous.
Technology can help, and the opportunity exists to implement highly automated solutions to the bulk of PCI requirements that are neither expensive nor difficult to implement and run.
And this new version of the DSS, with greater emphasis on making security a regular habit, is squarely in line with this. In fact, you could simplify the majority of the PCI DSS down to the following steps:
  • Implement basic perimeter and endpoint security with Firewalls, IPS and Anti-Virus
  • Audit Servers, Databases and Network Devices against NIST or CIS hardening checklists to eliminate vulnerabilities (use your FIM system for this)
  • Once devices have been hardened, implement continuous vulnerability monitoring, with real-time malware detection (in other words, real-time File Integrity Monitoring)
  • Instigate configuration change control to ensure devices remain secure at all times (FIM again), patch all systems monthly
  • Underpin processes with logging and SIEM as a checks and balances audit trail, with regular pen testing and ASV vulnerability scans
Take these steps, and you'll not just be ahead of the curve for PCI DSS Version 3.0, but probably Version 4.0 too.
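Since file integrity monitoring underpins several of the steps above, here is a bare-bones Python sketch of the core FIM idea: record a baseline of file hashes, then re-scan and report anything added, removed, or changed. The monitored path is an example; a production FIM tool also runs continuously and watches attributes, permissions, and (on Windows) registry keys.

import hashlib, json, os

WATCH_ROOT = "/var/www/payment-app"   # hypothetical directory to monitor

def hash_file(path):
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

def snapshot(root):
    files = {}
    for dirpath, _, names in os.walk(root):
        for name in names:
            full = os.path.join(dirpath, name)
            try:
                files[full] = hash_file(full)
            except OSError:
                pass                   # skip files we cannot read
    return files

def diff(baseline, current):
    added   = sorted(set(current) - set(baseline))
    removed = sorted(set(baseline) - set(current))
    changed = sorted(p for p in current if p in baseline and current[p] != baseline[p])
    return added, removed, changed

if __name__ == "__main__":
    # First run records the baseline; later runs compare against it.
    if not os.path.exists("baseline.json"):
        with open("baseline.json", "w") as f:
            json.dump(snapshot(WATCH_ROOT), f)
    else:
        with open("baseline.json") as f:
            baseline = json.load(f)
        print(diff(baseline, snapshot(WATCH_ROOT)))

An unexplained entry in the "changed" list is exactly the kind of event that should feed your change-control process and your SIEM audit trail.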
NNT is a manufacturer of data security solutions with a focus on helping organizations reduce risk and gain compliance with standards such as PCI DSS, Sarbanes-Oxley, HIPAA, and ISO 27k. Our software combines device hardening, SIEM, file integrity monitoring, and change and configuration management in one easy-to-use and affordable solution.