For those of us in advertising circles who have seen firsthand how Artificial Intelligence (AI) and machine learning can help us do our jobs better, it is natural to wonder whether AI-enabled computers will ultimately replace humans for critical functions in the industry.
Already, computers are replacing jobs across many industries, whether it’s finding ways to automate the assembly line for manufacturing plants or taking lunch orders at your local McDonald’s. In fact, a report from the White House issued last December warns that AI could threaten up to 47 percent of US jobs in the next two decades.
There’s no denying the fact that computers are simply better than humans at some advertising tasks. AI not only automates many of the labor-intensive processes that were previously handled by humans, but it performs them far more effectively due to its ability to analyze large volumes of data and make much faster decisions than any human ever could.
Nowhere is this capability more obvious than in the role of a media buyer. Traditionally, media buyers have had to handle several labor- and data-intensive tasks: understanding the whims of the market and adjusting bid rates accordingly, developing new creative that resonates with targeted audiences, and managing all of the placements where their campaigns run, to name a few.
And yet, today’s media buyers are tasked with managing campaigns that involve greater scale and exponentially more data points than ever before. With the media explosion caused by mobile apps, social networks, and other forms of digital entertainment, many media buyers are now managing a dizzying array of campaigns, placements and media partners compared to the past.
Given their ability to process data much faster than any human ever could, AI-enabled computers have replaced humans in managing many media buying functions -- to great effect. Whereas in the past media buyers have used rules, Excel spreadsheets and queries to get the job done, savvy media buyers are now relying on AI tools and algorithms to do those same tasks.
Take targeting, for example. Programmatic advertising has become a nearly $33 billion industry largely on the back of AI and machine learning. Media buyers find programmatic advertising so valuable because it uses AI to analyze immensely large sets of data about demographics, interests and purchasing preferences in order to determine the best audiences to target for a specific ad campaign. All of this happens in real-time, enabling media buyers to find a level of targeting and efficiency that was impossible before AI came along.
Or look at bid rate optimization. While media buyers can do their best to keep tabs on what their top competitors are bidding and how rates fluctuate with the times, computers can monitor every company bidding on related search terms and identify fluctuations as they happen -- then make changes in real time. We are already seeing some of the major platforms roll out AI-based bidding optimization, such as Facebook’s App Event Optimization, which uses machine learning to identify users who are likely to actually engage with an app or perform other valuable actions, not just install it. The solution even recommends how much to bid on those users by predicting their future value. Snapchat, meanwhile, launched Goal-Based Bidding (GBB) late last year.
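To make the idea concrete, here is a minimal Python sketch of value-based bid optimization, the pattern described above: predict a user's future value, then bid a fraction of it. Everything here (feature names, weights, margin) is an illustrative assumption, not Facebook's or Snapchat's actual model.

```python
# Minimal sketch of value-based bid optimization (illustrative only; the
# feature names, weights and margin below are assumptions, not any
# platform's real model or API).

def predict_user_value(user):
    """Stand-in for a trained model estimating a user's future value
    (e.g., expected revenue). A real system would call an ML model here."""
    return 0.4 * user["past_purchases"] + 0.1 * user["sessions_per_week"]

def recommend_bid(user, target_margin=0.3):
    """Bid a fraction of predicted value, leaving room for margin."""
    return round(predict_user_value(user) * (1 - target_margin), 2)

user = {"past_purchases": 12.0, "sessions_per_week": 5}
print(recommend_bid(user))  # 3.71
```

In production, the same loop would re-run as new events arrive, which is what lets the system react to bid fluctuations in real time.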
AI is even beginning to infiltrate the world of ad creative, a realm that many considered the one area where humans, with all of their creativity and imagination, can do better than computers. Automaker Toyota, for instance, used the AI tools of IBM’s Watson to develop a series of customized ad creative for its Rav4 crossover. One example targeted consumers who had demonstrated an interest in both running and luge racing, serving them ads encouraging them to try out a game-show-like activity called “Win Luge or Draw,” in which they draw pictures of movie scenes for team members to guess before the other team completes a 26.2-mile icy luge course.
All of this might lead you to believe that AI is on the cusp of replacing the media buying role. But we shouldn't be so quick to draw that conclusion. While computers are great at performing manual tasks that involve making quick sense of Big Data, they are no match for human intelligence when it comes to looking at the big picture and synthesizing effective strategies. Advertising is, at its heart, a creative endeavor, and there will always be roles for people who are able to see things in different ways, relate to audiences in earnest, and solve problems that others had given up trying to fix.
In today’s fast-moving landscape, approximately 80 percent of a media buyer’s time goes toward tasks to simply maintain a campaign’s performance, such as bidding, budgeting and creative optimization. The other 20 percent of their time goes to actually growing their campaigns, through strategies such as identifying new channels, analyzing user behaviors, hacking new creative, segmenting CRM databases, and so on.
In practical terms, this pitting of man versus machine represents a false dichotomy. While there are sure to be some jobs lost to computer automation along the way, it is wrong to assume that we are playing a zero-sum game.
By using AI and machine learning to automate a lot of the menial tasks that fall under their job descriptions, media buyers are able to free up their time to focus on other, higher-level objectives. Rather than spending half their day uploading new creative, running A/B tests, and adjusting Insertion Orders, media buyers can let computers handle all of that for them while they work on more strategic goals -- discovering new audiences, for instance, or breaking into new territories, brainstorming new creative, and so on.
Artificial intelligence is coming to the advertising industry whether we’re ready for it or not. In fact, it is already here -- not to steal our jobs, as some have forewarned, but to help us do those jobs better. If advertisers want to do their jobs more creatively and strategically, it’s time we let go of the “man vs. machine” mentality and embraced our new “teammates.”
by Peli Beeri
One of the increasingly popular remote access techniques to grant teleworkers access to internal corporate applications and data is to allow them to log into virtual desktops. While a virtual desktop infrastructure (VDI) can be operated on-premises, cloud-based VDI has plenty of benefits. Cloud-based remote access via VDI is often called desktop as a service (DaaS), and it takes away the upfront cost, buildout and management complexities from internal IT staff and offloads those duties to a cloud services provider. A properly tuned DaaS could be the most effective way to offer internal computing resources to remote workers around the globe.
If you prefer to use more locally deployed, software-based remote access technologies like IPsec or SSL virtual private networks (VPNs), the cloud can still assist. Many IT departments have discovered that moving their authentication mechanisms out of their private data centers and to cloud-based remote access allows for easier management and a more streamlined approach. If yours is like many organizations out there, you likely have some apps and data in the cloud and others in a private data center. Early hybrid cloud designs often left the authentication component in the private side of the network. However, now that most organizations are more comfortable with the security and stability of public cloud services, they have found that moving the end-user management and authentication to the cloud allows for a more centralized management experience for both publicly and privately hosted company resources.
In situations where staffers work out of small branch offices or teleworkers work from their homes, many companies are opting to build a different sort of remote access: a static, site-to-site VPN between the corporate LAN and the remote location of those end users. Connectivity still uses the internet for access, but the primary difference is that a hardware appliance is used on both sides of the VPN tunnel for automated authentication and encryption across the virtual tunnel. The benefit to the end user is that they are not required to manually authenticate each time they need to access a company resource. Instead, a site-to-site VPN acts as if it's simply an extension of the corporate LAN.
Previously, the high cost to deploy and remotely manage dozens or hundreds of site-to-site VPN tunnels led many IT departments to use such deployments sparingly. But thanks to lower hardware costs -- and advancements in cloud management technologies -- offering static VPN tunnels to large numbers of teleworkers is now a reality. Several examples of this exist in the market, including the Cisco Meraki Z1 teleworker gateway appliance, which offers a low price point and a cloud-managed interface for easy troubleshooting by corporate IT staff, as well as entry-level appliances from Fortinet and Check Point.
Finally, if you need traditional remote access services but would rather have someone else manage the entire architecture, you can go with a fully managed VPN provider. In this scenario, cloud-based remote access is achieved by allowing a cloud service provider to manage not only authentication but also the authorization, accounting and general maintenance of a standard remote access VPN service. Plenty of service providers offer VPN as a service, including technology companies like MegaPath and Zscaler. Wireless carriers such as AT&T and Verizon also offer business-class remote VPN access services that primarily target mobile workforces using smartphones and tablets to reach corporate resources.
by Andrew Froehlich
Internet connectivity over fiber-optic networks has become the gold standard for fast, high-quality data transmission for businesses. Yet the relative newness of the technology can leave some hesitant to invest in it for their business.
Fiber relies on light instead of electricity to transmit data, which enables much faster Internet connections capable of handling higher bandwidth. According to the FCC, fiber providers consistently deliver 117% of advertised speeds, even during times of peak demand.
Business Advantages of Fiber Optic Internet
While most business decision-makers are aware of the speed benefits of fiber, other advantages are less commonly understood. Spending on a newer technology can feel risky, especially for organizations that rely heavily on their Internet connectivity for customer communications, productivity, and collaboration.
In this article, you'll learn a bit more about the ways fiber-optic Internet compares to standard copper cable, including bandwidth potential, speed, and reliability, among other factors.
1. Bandwidth
Investing in fiber-optic internet can significantly increase your bandwidth potential. Copper wire infrastructure and TDM technology are inherently limited: because they were originally designed for transmitting voice calls only, the demand for bandwidth wasn't high. A T-1 line, for instance, can carry only 1.5 Mbps of throughput. And because of how electrical signaling works, many types of connections over copper are limited by distance.
Ethernet over Copper service (EoC) is typically not available if the circuit is longer than 15,000 feet. For organizations considering shifting their voice communications to Voice-Over-IP (VoIP), having your bandwidth delivered over fiber can be an indispensable asset.
2. Upload/Download Speed
Is the speed increase of fiber-optic internet noticeable compared to copper? Absolutely.
Many Atlantech Online customers using fiber to connect to our network can transmit data at 1 gigabit per second. That's many times faster than the federal government's definition of broadband service, which as of January 2015 is 25 Mbps for downloads and 3 Mbps for uploads.
Tech blog NorthWest writes that downloads that take 22 minutes over most copper wire Internet connections can take as little as 8 seconds on Internet connectivity delivered over fiber.
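Claims like these are easy to sanity-check with a little arithmetic. The Python sketch below computes transfer time for an assumed 1 GB file; the 6 Mbps copper figure is an assumption chosen to roughly match the quoted 22 minutes.

```python
# Back-of-the-envelope transfer-time check (file size and link speeds
# are assumed examples, not measured figures).
def transfer_seconds(file_size_gb, link_mbps):
    bits = file_size_gb * 8 * 1000**3      # decimal gigabytes to bits
    return bits / (link_mbps * 1000**2)    # megabits/s to bits/s

print(transfer_seconds(1, 6))     # ~1333 s, about 22 minutes on ~6 Mbps copper
print(transfer_seconds(1, 1000))  # 8 s on gigabit fiber
```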
With this technological advancement, the concept of "waiting for things to load" is about to be a thing of the past.
3. Distance
The signal on copper Internet networks degrades as it is carried away from the central office (CO). Fiber was originally used for long-haul networks; cell phone towers in remote locations, for example, use fiber optic cable to connect to the network.
According to Blackbox Technology, certain types of fiber connections can carry a signal for almost 25 miles. While most business buildouts won't require fiber runs that long, your signal isn't in danger of degrading within the metro fiber rings that would serve your business.
4. Security
In an era of increased attention to cyber security, fiber-optic internet is touted as a cost-effective way of instantly increasing your Internet security. Intercepting copper cable can be done by connecting taps to a line to pick up the electronic signals.
Putting a tap on a fiber-optic internet cable to intercept data transmissions is incredibly difficult. It's also easy to quickly identify compromised cables, which visibly emit light from transmissions.
5. Reliability
A number of factors can cause outages when an organization relies on copper cable-based internet. Temperature fluctuations, severe weather conditions, and moisture can all cause a loss of connectivity, and old or worn copper cable can even present a fire hazard, because it carries an electric current. Copper also risks interference from electronic or radio signals. In addition, the copper wires in a building are handled by telephone company personnel, who occasionally disturb the wrong wires, and every copper line terminates at the telephone company's central office, where disconnections can happen. Fiber is typically independent of the phone company, its equipment and its termination points.
6. Cable Size
The speed of internet transmitted via copper cable is directly correlated with the weight of cable used. For a business to achieve higher speeds, more cable must be used, which requires more space in a company's telecommunications room.
Fiber cable's speed is not tied to its size, and it's far lighter than copper. This makes it easier to use and less demanding of limited space in small rooms.
7. Cost
Investing in fiber internet will cost more than copper in the short term, though costs are decreasing drastically as the option becomes more commonplace. Ultimately, the total cost of ownership (TCO) over the lifetime of fiber is lower: it's more durable, cheaper to maintain, and requires less hardware. The advantages of fiber make it, overall, a more cost-effective investment for organizations of all sizes.
8. It's Sturdier
Copper cable is a relatively delicate technology. Typically, it can sustain about 25 pounds of pressure without being damaged, which means it can be compromised with relative ease during routine operations in a company's telecommunications space.
In contrast, fiber can withstand about 100 to 200 pounds of pressure, meaning it is far less likely to be damaged during routine work performed nearby.
Investing in Fiber Optic Internet
While organizational information technology needs can vary drastically, the benefits of fiber-optic internet make it an increasingly common choice for business data transmission. Companies that choose to invest in fiber typically find that the gains in total cost of ownership, bandwidth potential, and speed are noticeable.
by Tom Collins
A fiber optic cable is a network cable that contains strands of glass fibers inside an insulated casing. These cables are designed for long-distance, very high-performance data networking and telecommunications.
Compared to copper cables, fiber optic cables provide higher bandwidth and can transmit data over longer distances.
Fiber optic cables support much of the world's internet, cable television and telephone systems.
How Fiber Optic Cables Work
Fiber optic cables carry communication signals using pulses of light generated by small lasers or light-emitting diodes (LEDs).
The cable consists of one or more strands of glass, each only slightly thicker than a human hair. The center of each strand is called the core, which provides the pathway for light to travel. The core is surrounded by a layer of glass called cladding that reflects light inward to avoid loss of signal and allow the light to pass through bends in the cable.
The two primary types of fiber cables are called single-mode and multimode fiber. Single-mode fiber uses very thin glass strands and a laser to generate light, while multimode fiber uses LEDs.
Single-mode fiber networks often use wavelength-division multiplexing (WDM) techniques to increase the amount of data traffic that can be sent across a strand. WDM allows light at multiple different wavelengths to be combined (multiplexed) and later separated (de-multiplexed), effectively transmitting multiple communication streams over a single strand of fiber.
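As a purely conceptual illustration (WDM happens in optics, not software), the toy Python sketch below tags each data stream with a wavelength, combines the streams onto one "fiber", and filters a single channel back out. The wavelength values and payloads are made up for the example.

```python
# Toy model of wavelength-division multiplexing: independent streams ride
# distinct wavelengths over one fiber, then get separated at the far end.
streams = {
    1550: b"stream A: voice traffic",   # wavelengths in nanometres (illustrative)
    1551: b"stream B: video traffic",
    1552: b"stream C: data traffic",
}

def multiplex(streams):
    # Combine all channels onto one "fiber" (a list of tagged signals).
    return [(wavelength, payload) for wavelength, payload in streams.items()]

def demultiplex(fiber, wavelength):
    # Recover just the channel carried on one wavelength.
    return next(payload for wl, payload in fiber if wl == wavelength)

fiber = multiplex(streams)
print(demultiplex(fiber, 1551))  # b'stream B: video traffic'
```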
Advantages of Fiber Optic Cables
Fiber cables offer several advantages over traditional long-distance copper cabling.
Some better-known fiber-to-the-home (FTTH) services in the market today include Verizon FIOS and Google Fiber. These services can provide gigabit (1 Gbps) internet speeds to each household, though providers typically also offer lower-cost, lower-capacity packages to their customers.
The term FTTH sometimes also refers to privately operated fiber installations.
by Bradley Mitchell
The technologies making waves in 2017 include brain implants and quantum computers.
Here is a list of the top 10 technologies that are expected to be prevalent this year, according to MIT Technology Review.
AI that learns like humans
At the top of the list is behavior-reinforced artificial intelligence, whether that’s mastering the complex game of Go and beating a champion or learning to merge a self-driving car into traffic.
The technology is based on reinforcement learning, documented more than 100 years ago by psychologist Edward Thorndike. He showed that cats eventually learned how to escape from a box with a latched door by trial and error. That behavior was reinforced with a reward (food) and eventually became an established behavior, as the short sketch after this item illustrates.
Availability: 1 to 2 years
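For readers curious what that looks like in code, here is a minimal tabular sketch of Thorndike's experiment in Python: an agent learns, from reward alone, that pressing the latch pays off. It illustrates the principle only and is not the method behind Go-playing or self-driving systems.

```python
import random

# Thorndike's box as trial-and-error learning: action 1 (press the latch)
# earns food; action 0 (paw at the wall) earns nothing.
actions = [0, 1]
q = {a: 0.0 for a in actions}    # learned value estimate per action
alpha, epsilon = 0.1, 0.2        # learning rate and exploration rate

for trial in range(500):
    # Explore occasionally; otherwise exploit the best-known action.
    a = random.choice(actions) if random.random() < epsilon else max(q, key=q.get)
    reward = 1.0 if a == 1 else 0.0
    q[a] += alpha * (reward - q[a])   # reinforce toward the observed reward

print(q)  # q[1] approaches 1.0: latch-pressing becomes the established behavior
```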
360-degree cameras for everyone
People experience the world in 360 degrees -- now consumer cameras can too.
Until recently, that wasn’t the case: it used to cost thousands of dollars to build a system that replicated a 360-degree experience. Today, you can grab a good 360-degree camera for under $500.
The key is using the technology in a way that doesn’t bore your friends and family. Interesting applications include journalists using low-cost 360 cameras to document news, including this New York Times video that can be panned 360 degrees showing the devastation left by ISIS in Palmyra, Syria.
Gene therapy for curing hereditary disorders
This is best illustrated in the case of a baby boy who had a serious immune deficiency that forced his parents to wear surgical masks and boil his toys in water.
They believed the only option was a bone marrow transplant, but then learned about a therapy that replaced the gene that was destroying his immune system. It worked, and the baby was cured.
Availability: 10 to 15 years
Solar Cells that are twice as efficient
So-called "hot" solar cells convert “heat to focused beams of light.”
The operative phrase here is that it could be “roughly twice as efficient as conventional photovoltaics” and lead to cheap solar power that keeps working at night.
Availability: 10 to 15 years
A map of every human cell type
This could reveal “a sophisticated new model of biology” that speeds the search for drugs. Research suggests that there are about 300 cell variations but the “true figure is undoubtedly larger.”
This will allow discovery of new cell types and accelerate testing of new drugs.
Availability: 5 years
Self-driving trucks
We’ve heard lots about self-driving cars – but trucks? One idea is for these future trucks to drive autonomously on long highway stretches when drivers might not be alert.
A broader application is convoys that “platoon” together to cut down on wind drag and save on fuel costs.
Availability: 5 to 10 years
Pay by face
A flick of your Apple Watch to pay at Starbucks is already doable in the real world. The next step may be face recognition that is “finally accurate enough to be widely used in financial transactions and other everyday applications.”
Baidu, China’s most popular search engine, is working on a system that lets people buy rail tickets with a face scan.
Practical quantum computers
The first thing to understand about quantum computers is that they’re not easy to explain.
The upshot is that these computers, using quantum bits, can crunch certain very complex calculations much faster than traditional computers.
Availability: 4 to 5 years
Reversing paralysis
In an experiment, a monkey regained movement in a paralyzed leg via man-made electronic interfaces. Essentially, these interfaces “bypass damage” to the nervous system.
The obvious application is helping people who suffer paralyzing injuries.
Availability: 10 to 15 years
Botnets of Things
This isn’t a good thing. It’s malware that “takes control of webcams, video recorders, and other consumer devices” to wreak chaos on the Internet.
“Botnets based on this software are disrupting larger and larger swaths of the Internet—and getting harder to stop.”
ANY sufficiently advanced technology, noted Arthur C. Clarke, a British science-fiction writer, is indistinguishable from magic. The fast-emerging technology of voice computing proves his point. Using it is just like casting a spell: say a few words into the air, and a nearby device can grant your wish.
The Amazon Echo, a voice-driven cylindrical computer that sits on a table top and answers to the name Alexa, can call up music tracks and radio stations, tell jokes, answer trivia questions and control smart appliances; even before Christmas it was already resident in about 4% of American households. Voice assistants are proliferating in smartphones, too: Apple’s Siri handles over 2bn commands a week, and 20% of Google searches on Android-powered handsets in America are input by voice. Dictating e-mails and text messages now works reliably enough to be useful. Why type when you can talk?
This is a huge shift. Simple though it may seem, voice has the power to transform computing, by providing a natural means of interaction. Windows, icons and menus, and then touchscreens, were welcomed as more intuitive ways to deal with computers than entering complex keyboard commands. But being able to talk to computers abolishes the need for the abstraction of a “user interface” at all. Just as mobile phones were more than existing phones without wires, and cars were more than carriages without horses, so computers without screens and keyboards have the potential to be more useful, powerful and ubiquitous than people can imagine today.
Voice will not wholly replace other forms of input and output. Sometimes it will remain more convenient to converse with a machine by typing rather than talking (Amazon is said to be working on an Echo device with a built-in screen). But voice is destined to account for a growing share of people’s interactions with the technology around them, from washing machines that tell you how much of the cycle they have left to virtual assistants in corporate call-centres. However, to reach its full potential, the technology requires further breakthroughs—and a resolution of the tricky questions it raises around the trade-off between convenience and privacy.
Alexa, what is deep learning?
Computer-dictation systems have been around for years. But they were unreliable and required lengthy training to learn a specific user’s voice. Computers’ new ability to recognise almost anyone’s speech dependably without training is the latest manifestation of the power of “deep learning”, an artificial-intelligence technique in which a software system is trained using millions of examples, usually culled from the internet. Thanks to deep learning, machines now nearly equal humans in transcription accuracy, computerised translation systems are improving rapidly and text-to-speech systems are becoming less robotic and more natural-sounding. Computers are, in short, getting much better at handling natural language in all its forms (see Technology Quarterly).
Although deep learning means that machines can recognise speech more reliably and talk in a less stilted manner, they still don’t understand the meaning of language. That is the most difficult aspect of the problem and, if voice-driven computing is truly to flourish, one that must be overcome. Computers must be able to understand context in order to maintain a coherent conversation about something, rather than just responding to simple, one-off voice commands, as they mostly do today (“Hey, Siri, set a timer for ten minutes”). Researchers in universities and at companies large and small are working on this very problem, building “bots” that can hold more elaborate conversations about more complex tasks, from retrieving information to advising on mortgages to making travel arrangements. (Amazon is offering a $1m prize for a bot that can converse “coherently and engagingly” for 20 minutes.)
When spells replace spelling
Consumers and regulators also have a role to play in determining how voice computing develops. Even in its current, relatively primitive form, the technology poses a dilemma: voice-driven systems are most useful when they are personalised, and are granted wide access to sources of data such as calendars, e-mails and other sensitive information. That raises privacy and security concerns.
To further complicate matters, many voice-driven devices are always listening, waiting to be activated. Some people are already concerned about the implications of internet-connected microphones listening in every room and from every smartphone. Not all audio is sent to the cloud—devices wait for a trigger phrase (“Alexa”, “OK, Google”, “Hey, Cortana”, or “Hey, Siri”) before they start relaying the user’s voice to the servers that actually handle the requests—but when it comes to storing audio, it is unclear who keeps what and when.
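As an illustration of that gating behavior, the Python sketch below relays only the audio that follows a detected trigger phrase. The helper names and the use of pre-transcribed text chunks are simplifying assumptions; real devices run on-device acoustic models rather than string matching.

```python
# Sketch of trigger-phrase gating: audio stays local until the wake word
# is heard, and only the following utterance is relayed to the cloud.
TRIGGER = "alexa"

def handle_audio(chunks, send_to_cloud):
    armed = False
    for chunk in chunks:
        if not armed:
            armed = TRIGGER in chunk.lower()   # wake word detected: start relaying
        else:
            send_to_cloud(chunk)   # only post-trigger audio leaves the device
            armed = False          # one request per activation

handle_audio(["background chatter", "Alexa", "what's the weather?"],
             send_to_cloud=print)  # prints only: what's the weather?
```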
Police investigating a murder in Arkansas, which may have been overheard by an Amazon Echo, have asked the company for access to any audio that might have been captured. Amazon has refused to co-operate, arguing (with the backing of privacy advocates) that the legal status of such requests is unclear. The situation is analogous to Apple’s refusal in 2016 to help FBI investigators unlock a terrorist’s iPhone; both cases highlight the need for rules that specify when and what intrusions into personal privacy are justified in the interests of security.
Consumers will adopt voice computing even if such issues remain unresolved. In many situations voice is far more convenient and natural than any other means of communication. Uniquely, it can also be used while doing something else (driving, working out or walking down the street). It can extend the power of computing to people unable, for one reason or another, to use screens and keyboards. And it could have a dramatic impact not just on computing, but on the use of language itself. Computerised simultaneous translation could render the need to speak a foreign language irrelevant for many people; and in a world where machines can talk, minor languages may be more likely to survive. The arrival of the touchscreen was the last big shift in the way humans interact with computers. The leap to speech matters more.
Personally, I’m amazed at the technology we have available to us. It’s astounding to have the power to retrieve almost any information and communicate in a thousand different ways using a device that fits in your pocket.
There’s always something new on the horizon, and we can’t help but wait and wonder what technological marvels are coming next.
The way I see it, there are seven major tech trends we’re in store for in 2017. If you’re eyeing a sector in which to start a business, any of these is a pretty good bet. If you're already an entrepreneur, think about how you can leverage these technologies to reach your target audience in new ways.
1. IoT and Smart Home Tech.
We’ve been hearing about the forthcoming revolution of the Internet-of-Things (IoT) and resulting interconnectedness of smart home technology for years. So what’s the holdup? Why aren’t we all living in smart, connected homes by now? Part of the problem is too much competition, with not enough collaboration—there are tons of individual appliances and apps on the market, but few solutions to tie everything together into a single, seamless user experience. Now that bigger companies already well-versed in uniform user experiences (like Google, Amazon, and Apple) are getting involved, I expect we’ll see some major advancements on this front in the coming year.
2. AR and VR.
We’ve already seen some major steps forward for augmented reality (AR) and virtual reality (VR) technology in 2016. Oculus Rift was released, to positive reception, and thousands of VR apps and games followed. We also saw Pokémon Go, an AR game, explode with over 100 million downloads. The market is ready for AR and VR, and we’ve already got some early-stage devices and tech for these applications, but it’s going to be next year before we see things really take off. Once they do, you’ll need to be ready for AR and VR versions of practically everything—and ample marketing opportunities to follow.
3. Machine Learning.
Machine learning has taken some massive strides forward in the past few years, even emerging to assist and enhance Google’s core search engine algorithm. But again, we’ve only seen it in a limited range of applications. Throughout 2017, I expect to see machine learning updates emerge across the board, entering almost any type of consumer application you can think of, from offering better recommended products based on prior purchase history to gradually improving the user experience of an analytics app. It won’t be long before machine learning becomes a kind of “new normal,” with people expecting this type of artificial intelligence as a component of every form of technology.
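As a flavor of what "better recommended products based on prior purchase history" can mean at its simplest, here is a co-occurrence recommender sketch in Python. The data and scoring are illustrative, not any vendor's actual algorithm.

```python
from collections import Counter

# Recommend items that most often co-occur with what the user already bought.
order_history = [
    {"laptop", "mouse"},
    {"laptop", "mouse", "dock"},
    {"mouse", "mousepad"},
    {"laptop", "dock"},
]

def recommend(user_items, history, top_n=2):
    counts = Counter()
    for order in history:
        if order & user_items:                 # order shares an item with the user
            counts.update(order - user_items)  # tally the items they don't own yet
    return [item for item, _ in counts.most_common(top_n)]

print(recommend({"laptop"}, order_history))  # ['mouse', 'dock']
```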
4. Automation.
Marketers will be (mostly) pleased to learn that automation will become a bigger mainstay throughout 2017, with advanced technology enabling the automation of previously human-exclusive tasks. We’ve had robotic journalists in circulation for a couple of years now, and I expect it won’t be long before they make another leap into more practical types of articles. It’s likely that we’ll start seeing productivity skyrocket in a number of white-collar jobs—and we’ll start seeing some jobs disappear altogether. When automation is combined with machine learning, everything can improve even faster, so 2017 has the potential to be a truly landmark year.
5. Humanized Big Data. (visual, empathetic, qualitative)
Big data has been a big topic for the past five years or so, when it started making headlines as a buzzword. The idea is that mass quantities of gathered data—which we now have access to—can help us in everything from planning better medical treatments to executing better marketing campaigns. But big data’s greatest strength—its quantitative, numerical foundation—is also a weakness. In 2017, I expect we’ll see advancements to humanize big data, seeking more empathetic and qualitative bits of data and projecting them in a more visualized, accessible way.
6. Physical-Digital Integrations.
Mobile devices have been slowly adding technology into our daily lives. It’s rare to see anyone without a smartphone at any given time, giving us access to practically infinite information in the real-world. We already have things like site-to-store purchasing, enabling online customers to buy and pick up products in a physical retail location, but the next level will be even further integrations between physical and digital realities. Online brands like Amazon will start having more physical products, like Dash Buttons, and physical brands like Walmart will start having more digital features, like store maps and product trials.
7. Everything On-Demand.
Thanks to brands like Uber (and the resulting madness of startups built on the premise of being the “Uber of ____”), people are getting used to having everything on demand via phone apps. In 2017, I expect to see this develop even further. We have thousands of apps available to us to get rides, food deliveries, and even a place to stay for the night, but soon we’ll see this evolve into even stranger territory.
Anyone in the tech industry knows that making predictions about the course of technology’s future, even a year out, is an exercise in futility. Surprises can come from a number of different directions, and announced developments rarely release as they’re intended.
Still, it pays to forecast what’s coming next so you can prepare your marketing strategies (or your budget) accordingly. Whatever the case may be, it’s still fun to think about everything that’s coming next.
by Jayson DeMers
If you’re considering a cloud solution for your business, you’ve likely explored both public and private cloud options, or even discovered the hybrid cloud, a blend of both environments. Meanwhile, there’s an overwhelming amount of information online, which can leave you puzzled about which model is right for you.
The cloud model you choose depends on which features you find most important and how much you’re willing to invest. While the more cost-effective public cloud is easy to manage and offers increased scalability, a private cloud provides greater control and heightened security for mission-critical data and applications. Hybrid cloud brings the best of both worlds, merging public and private cloud for lower total cost of ownership (TCO), with enhanced security, scalability and management features.
Ask yourself these questions to determine which solution will best serve your business needs:
Alternatively, if you answered no, a fully public cloud offers you an affordable solution that scales up or down as needed, leaving room for growth for your small or midsize company. With no server or infrastructure investment, a public cloud cuts costs associated with initial deployment, software licensing fees, and dedicated IT personnel.
Somewhere in between? A hybrid cloud can provide a low-cost transition strategy for more cloud-based operations later, or act as a long-term solution that guarantees sensitive data stays on your premises while separately creating a resilient, scalable solution, giving your organization room to grow. Choosing a hybrid model allows you to benefit from lower TCO and quicker results, without compromising sensitive data availability or compliance needs.
Designing your hybrid cloud
Many organizations will decide a hybrid cloud is right for them, but that opens a more difficult question: Which applications should be hosted publicly vs. privately?
The questions above can help determine which applications to host where. The scale of your internal management resources, from both a staffing and an investment perspective, will determine how many applications can be hosted privately or publicly.
From there, examine the needs for each application you’d like to host in the cloud. If you need more control over the data the application contains, a private cloud may work well. If you think a particular application is going to need to grow (or shrink) anytime in the future, the scalability of the public cloud meets these needs.
You’ll also want to consider your availability needs. Many public cloud providers offer service-level agreements that guarantee certain levels of uptime. A private, self-managed cloud may not ensure the same. If downtime for a particular application is problematic, a public cloud option may be most suitable. While this leaves you entirely reliant on the provider to fix problems should services go down, a private cloud’s downtime would be your internal responsibility. Be sure to consider whether you have the resources to address downtime issues if you opt for the private option for crucial applications.
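When comparing SLAs, it helps to translate uptime percentages into concrete downtime. A quick back-of-the-envelope calculation (it ignores scheduled maintenance windows):

```python
# Convert an SLA uptime percentage into allowed downtime per year.
def downtime_hours_per_year(sla_percent):
    return (1 - sla_percent / 100) * 365 * 24

print(downtime_hours_per_year(99.9))   # ~8.8 hours per year
print(downtime_hours_per_year(99.99))  # ~0.9 hours per year
```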
Finally, be sure you understand each application’s security needs. Based on your risk of an attack, you must determine how the data you plan to store in the cloud will be secured. Some applications may need to remain on-premises or in a private cloud due to security and regulatory requirements, while other data can be more easily stored in the cloud.
As you evaluate the options for your next cloud deployment, make sure to take both your business’ continued evolution and overall readiness into consideration. Hybrid cloud will make it easy to test the waters without fully migrating over, which makes it just as important to find a cloud vendor that allows you to move freely between private, public and hybrid deployment models. Depending on your business requirements and goals, make sure you carefully weigh this decision with a vendor that will help you tailor your solution to your business’ specific requirements today, and ensure it’s flexible enough to grow or change in the future.
By Jamshid Rezaei
Cloud computing is no longer the “niche” or novel area it was once considered. Even if your organization isn’t embracing cloud computing internally, you are probably using a cloud-based platform for a CRM or relying on websites powered by Amazon Web Services (AWS), which claims 42% of the cloud market by revenue.
But despite the cloud’s ubiquity, DevOps and engineering teams are often unsure whether their cloud environment is secure. In an Intel Security survey, only one-third of respondents expressed confidence that their senior management had a grip on cloud security.
Cloud computing – especially AWS – is here to stay, so how can your organization ensure its security? Ask your IT team the following three questions if you want to ease your – and your employees’ – minds.
The most important thing to realize about AWS security is that it’s a question of configuration. Picture it like IKEA furniture – all the components are there for you to have a chair, but if you assemble it incorrectly, you could end up with something more like a lopsided table. The same applies to your cloud infrastructure.
Question #1: How are we monitoring security events in our cloud environment?
Ideally, your team will already be doing this with built-in tools offered by the service provider. However, from experience, this is not always the case. Your team should be automating anomaly detection from logs and should be alerted whenever anything out of the ordinary happens.
There are plenty of tools that exist for this purpose. AWS itself offers CloudTrail, along with CloudWatch, which can be used to monitor events like user account activity; your team should be using both.
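As a minimal illustration of the kind of automated anomaly detection described above, the Python sketch below counts failed console logins per source IP and flags repeat offenders. The event shape mirrors CloudTrail's ConsoleLogin records, but the threshold and the alerting hook are assumptions; a real deployment would read events from CloudTrail or CloudWatch and page a human.

```python
from collections import Counter

# Flag source IPs with repeated failed console logins (illustrative threshold).
def scan_events(events, max_failures=5):
    failures = Counter()
    for e in events:
        if e.get("eventName") == "ConsoleLogin" and e.get("errorMessage"):
            failures[e["sourceIPAddress"]] += 1
    return [ip for ip, n in failures.items() if n >= max_failures]

events = [{"eventName": "ConsoleLogin", "errorMessage": "Failed authentication",
           "sourceIPAddress": "203.0.113.9"}] * 6
for ip in scan_events(events):
    print(f"ALERT: repeated failed logins from {ip}")  # wire this to email/paging
```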
Question #2: How are we securing our keys?
Disturbingly, not everyone can answer this question – the truth is that keys are often poorly managed. The keys in question are what permit access to your cloud infrastructure. Logically, the production keys should be well-guarded – and not left exposed in source code, shared with the entire development team, or given freely to contractors (all common situations).
Amazon provides the Key Management Service (KMS) – the tools are at your team’s fingertips. However, if they aren’t used correctly, they can’t be effective. (You can also use a third-party service like Vault, which offers similar functionality, suitable for everything from employee credential storage to data encryption.)
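For teams wondering what using KMS looks like at the API level, here is a minimal boto3 sketch. The key alias is hypothetical, and note that KMS caps direct Plaintext encryption at 4 KB, so larger payloads are handled with envelope encryption using data keys.

```python
import boto3

# Encrypt and decrypt a small secret with AWS KMS via boto3.
kms = boto3.client("kms")

ciphertext = kms.encrypt(
    KeyId="alias/app-secrets",   # hypothetical key alias; create your own in KMS
    Plaintext=b"database-password",
)["CiphertextBlob"]

# For symmetric keys, KMS infers the key from the ciphertext itself.
plaintext = kms.decrypt(CiphertextBlob=ciphertext)["Plaintext"]
assert plaintext == b"database-password"
```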
Question #3: How are you securely storing sensitive data?
Data storage can easily turn into a case of “who’s on first.” Often, the permissions are awry. There’s no reason there should be copies of production data in development systems. Your production environment should be locked down, whereas your development environment should be a “playground” for the engineering teams.
Who has access and what they have access to should be closely controlled when it comes to production data; following the principle of least privilege and performing regular reviews is best practice here.
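One lightweight way to run those regular reviews is a read-only script that lists each IAM user's attached policies so overly broad grants stand out. A minimal boto3 sketch (pagination omitted for brevity):

```python
import boto3

# List every IAM user's attached managed policies for a least-privilege review.
iam = boto3.client("iam")

for user in iam.list_users()["Users"]:
    name = user["UserName"]
    attached = iam.list_attached_user_policies(UserName=name)["AttachedPolicies"]
    for policy in attached:
        # Anything like AdministratorAccess on a regular user deserves scrutiny.
        print(f"{name}: {policy['PolicyName']}")
```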
Security by Design – Not by Accident
This is a concept that security professionals often beat people over the head with, but for good reason. Insecure AWS configurations suffer from its absence (as recently seen in the Verizon breach that left 14 million customer records exposed). These configurations end up in poor shape because security was not a focus from the start, and because there was probably some confusion surrounding the actual technology. Each cloud-based platform is different, and it’s important for your engineering team to thoroughly understand what can go wrong.
When your team is configuring the AWS environment, follow the OWASP Security by Design principles – specifically the previously mentioned principle of least privilege and separation of duties. AWS has its own Security by Design documentation available as well. Most security missteps are easy to avoid with some careful forethought.
Maintenance matters, too: perform security design reviews before you deploy your environment, and keep assessing your infrastructure’s security regularly going forward.
AWS comes with pitfalls (and enough jargon to fill its own dictionary), but if designed and implemented properly, security will be more achievable. And instead of a lopsided table, you’ll have a chair.
by Christie Terrill