Whether you’re an office worker or a busy person at home managing bills and scheduling, you likely rely on a computer to get things done. Even in the "age of mobile", computers are essential, and many people are put in a bind if their computer suddenly dies.
To protect these devices, it’s important to understand some of the common (yet surprising) ways you can fry your computer:
Lightning strike surges.
A lightning bolt can carry about a billion joules of energy, which creates a powerful surge that can easily overwhelm electronic devices. And direct hits happen more often than you might think. A man in Rochester, New York was recently struck by lightning while working at his office, and the bolt damaged multiple computers and systems.
When lightning hits a building it goes through the entire structure, and in the best-case scenario it will travel through a grounding rod which pushes the energy safely into the ground. While the current rushes through the building, it goes through walls, sockets, and any device that is currently plugged in. Even if a computer is protected by a high-end surge protector, a direct lightning strike can destroy it in an instant. The only way to prevent such damage is to unplug the computer and power it by battery, so it’s separated entirely from power sources.
Keeping the computer "on" all the time. Many busy professionals leave work for the day and keep their computer on so it’s readily accessible in the morning. Or home users keep it on day and night so it’s convenient for everyone in the family. Despite saving a few minutes, leaving the computer on 24/7 can shorten its lifespan. A computer in sleep mode is still idling, and its cooling system still has to operate to avoid overheating. These parts wear out over time, and a user who turns off the computer for twelve or more hours a day can greatly extend the usable life of its components.
Rebooting the machine also gives the computer the chance to clear memory fragments and process updates. Ignoring such updates can expose the computer to malware and/or prevent it from receiving an important patch that might save the computer from the "blue screen of death."
Diving into advanced settings. Attempting to boost or alter the machine’s performance is a good idea, but only if you take the right actions. Running a defragmentation or virus scan is a simple and worthwhile task, but you have to tread carefully when it comes to "advanced settings." Some users might try to adjust the computer’s BIOS settings, which tell the computer what steps to take right after booting. The problem with making these kinds of changes is that they can introduce undesired consequences; for example, the OS might not load properly. And with the typical cost of computers today, fixing the problem with professional help will be close to the cost of a new machine.
Dust and ventilation issues. Desktop and laptop computers need free space around them so they can properly vent heat. Avoid using your laptop on top of a fluffy blanket, as this can insulate the heat and easily clog the vents. Laptops are especially prone to heat problems because they’re often used in non-desk settings, so always provide some space around the laptop to prevent a crash. Several laptop cooling pads on the market can help prevent overheating during long sessions.
Dust is another issue, as it can easily collect on the exhaust fan. Dust acts as an insulating layer for components that need to shed heat, so it’s worth opening up the machine’s case every few months to clear away dust, especially if the computer is used in a dust-prone environment. Use a can of compressed air to safely clean the machine.
When treated carefully, desktop computers and laptops can last for several years. Extending the life of these machines helps companies and individuals to get the most "cost per year" value, and prevents the possibility of a crash and losing valuable data.
By David Zimmerman
For information and assessments, contact us.
Keeping your network in good shape can be a headache, especially after you decide to allow Voice-over-IP (VoIP) calls on your network. Here's how to prepare your network for VoIP.
If your small to midsize business (SMB) has decided to make the shift from landline phones to a business Voice-over-IP (VoIP) service, then you'll want to be aware of several key networking challenges that VoIP newbies face. In some cases, switching to VoIP requires an entire office restructuring, a different approach to using wireless internet, or a trip to the store to purchase more Ethernet cables.
To help you anticipate and prepare for these networking issues, I spoke with Curtis Peterson, Senior Vice President of Cloud Operations at cloud-based business phone system provider RingCentral. We discussed some of the obstacles Peterson witnesses when helping companies move to RingCentral products. Keep in mind: Some of the terminology and phrasing you'll read in this article may sound confusing, which is why companies such as RingCentral offer guided installation services to smaller organizations. If you've got networking expertise in-house, then you'll be able to manage most of these issues on your own. However, if you don't know the difference between Wi-Fi and dial-up service, well, then your vendor will work with you to get you set up pronto.
1. Determine Your Endpoints
Before we get into networking specifics, you'll have to determine the devices on which you'll let your employees make VoIP calls. You can purchase dedicated VoIP phones that let employees make and receive calls from their desk. You can also make VoIP calls directly from a computer without ever touching an actual phone. To piggyback off that technique, you can also make VoIP calls from smartphones. Determine which, if not all, of these endpoints you'll be using immediately. "Before the network requires more thought, determine that," advised Peterson.
2. Buy Wires
This is a no-brainer but, now that you're making the switch to VoIP, you'll need enough Ethernet cables to connect your devices to the internet. Additionally, you'll need to purchase the right Ethernet cables. Peterson recommends buying Cat 6 cables if you can afford them. These cables can typically support 10 Gigabit Ethernet (10GbE) at 250 MHz for up to 328 feet. You can get 1,000 feet for anywhere from $90 to $170. If you can't afford Cat 6, then Peterson recommends you use Cat 5e cables, which can support up to 100-MHz bandwidth. Peterson discourages his clients from using older Cat 3 cables, which he said presents a "troubleshooting nightmare."
3. Choose a Power Supply
The easiest way to ensure that you're getting power to your VoIP phones is to use Power over Ethernet (PoE). PoE lets devices that aren't plugged into AC outlets draw power over the same Ethernet cable that carries their data. Companies use PoE for surveillance cameras, ceiling-mounted access points, and even LED lights. If your Ethernet switch doesn't support PoE, then you can order a PoE injector, an additional power source that can be used alongside non-PoE switches.
4. Manage Internet Traffic With a Dedicated VLAN
Building your network around a dedicated Virtual Local Area Network (VLAN) lets you better distribute network traffic to ensure that voice and video calls don't get dropped when someone starts downloading a large file onto their computer. If you dedicate your VLAN only to phone and video traffic, then you'll be able to isolate and manage VoIP traffic without having to worry about unrelated traffic.
5. Manage Wireless Traffic With Access Point Handoff
"Traditional Wi-Fi networks are usually a small managed system designed for laptops and tablets, and not for voice and video," said Peterson. Because of this discrepancy, it's important that you analyze your network to determine how many simultaneous calls your wireless connection can manage. Peterson recommends managed Wi-Fi that supports access point (AP) handoff for when one network becomes overburdened. He also suggests a system that is set for smaller packet sizes as well as an on-premises or cloud-based controller that can manually control access points when necessary.
6. Test Your Firewalls
Peterson suggests taking a vendor's maximum published throughput with a grain of salt. "This is not enough of a benchmark for how much media you can drive through a firewall," he explained. If you don't have someone in your organization who can help you determine the difference between media and data traffic, then contact a professional. Peterson recommends using software-defined firewalls, which are designed to filter media traffic as well as ordinary data traffic.
7. Doublecheck Your Router
Determine whether your router has Packets Per Second (PPS) capability. This functionality provides traffic shaping and policing, which lets you prioritize voice and video data on your network. "What we look for is basically assuming one out of every five people will be on a 1-megabit-per-second [Mbps] voice call, and one out of every seven will be on a [video] conference at 100 megabits per second," he said. Estimate the number of users at your company who will be on a voice call or a video call at any given moment, and then multiply that number by a minimum of five. That's how many Mbps of traffic your router should be able to manage without any issue.
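Peterson's rule of thumb can be turned into a quick back-of-the-envelope calculation. The Python sketch below is one interpretation of it: the ratios, per-call bitrates, and 5x safety margin come from the quote above, while the function name and the way the two figures are combined are my own assumptions.

```python
def required_router_mbps(employees, voice_share=1/5, video_share=1/7,
                         voice_mbps=1.0, video_mbps=100.0, headroom=5):
    """Rough estimate of the sustained Mbps a router should handle.

    Assumes 1 in 5 employees is on a ~1 Mbps voice call and 1 in 7 is
    on a ~100 Mbps video conference at any moment, then applies a 5x
    safety margin. Combining the figures this way is an interpretation
    of the rule of thumb, not an exact formula.
    """
    voice = employees * voice_share * voice_mbps
    video = employees * video_share * video_mbps
    return (voice + video) * headroom

# A hypothetical 35-person office:
print(round(required_router_mbps(35)))  # → 2535 Mbps
```

Even as a rough sketch, this makes the point that video, not voice, dominates the bandwidth budget.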
By Juan Martinez
For more information on how to set up a VoIP system and for voice service quotes, contact us.
VoIP (Voice over IP) is a service that lets you make phone calls and get in touch with one another over the internet, without any of the hassle that can come with a traditional telephone line. Instead of traveling through traditional telephony networks, your voice is sent over the internet to the other user, giving you a much more convenient call experience while also delivering great savings on your communication costs.
How do I pick one?
The problem is that there are so many VoIP services available on the market, and a whole host of providers trying to peddle deals, all claiming that theirs is the best. It can be really difficult and confusing trying to wade through all of the sales talk and pick one that is going to suit your needs. That is where review sites come in: they offer an in-depth, full lowdown on everything from pricing, usability, quality, support and features to, of course, both the pros and the cons. Doing your homework on these sites before making a purchase is really important and should be a vital step of the purchasing process.
Business or personal?
Although VoIP can be useful for personal use, it really shines when it is incorporated into a business and its communication needs. It can be cheaper, as calls are carried over your regular internet connection and so won’t rack up extra charges when calling internationally. It is a great asset for companies that want to stay connected even though their offices may be spread around the world, as there are no issues with connecting calls internationally. A manager in the New York office can talk to a manager in the Shanghai office, and it wouldn’t cost any more than sending an email. That interconnectivity is great when taking into account future business needs. When travelling, communicating with the home base isn’t a headache: it can be as easy as opening up the computer and calling, instead of trying to find and finance a mobile phone and calling plan while abroad. This is extremely useful when setting up a new office or business, or just staying in touch to keep updated.
Any business that wants to get updates as quickly as possible benefits from using VoIP services, and you should consider making the switch now. You don’t even need to use a computer: you can keep things traditional by using a VoIP-enabled telephone that is connected to the internet. It’s great for keeping in touch with clients, as you can call normal phones even when using a broadband phone network. Overall, VoIP systems give a business a greater range of flexibility and allow it to work in a way that gets the best out of its employees. Being able to communicate effectively gives that extra edge when getting work done.
For information about dozens of VoIP providers, contact us. Information is free!
We live in a choice-driven society powered by technology. Whether it’s cutting the cable cord and moving to à la carte television or ditching the concept of ownership and opting to take part in the sharing economy, people want to pick and choose what works best for their individual needs instead of relying on a single provider or product.
In the case of cloud computing, a plethora of options is available to IT network teams for how they use infrastructure in private clouds, public clouds, and on-premises data centers, and how they connect enterprise applications running in these environments.
With cloud adoption no longer an “if” but a “how,” companies are focusing on creating the right mix of public and private clouds to maximize efficiency and minimize spending. According to RightScale's State of the Cloud Survey, 85% of enterprises in 2017 are now creating cloud strategies that include multiple clouds, of which 58% are planning to implement hybrid cloud environments.
While there are many benefits to utilizing both private and public clouds, the hybrid cloud environment poses some challenges that need to be addressed to ensure security and efficiency.
Handling Encrypted Traffic
Understanding the types of security threats (many of which revolve around access) helps address them properly. The rise in hybrid clouds has also resulted in an increase in the amount of encrypted traffic that application networking solutions need to handle. Load balancers, which are often tasked with decrypting incoming traffic, need to support the latest encryption protocols and ciphers, such as elliptic curve cryptography (ECC). They also need to scale horizontally and support granular per-app services to address the unique needs of each application.
The Risk Of Retrofitting Network Design
Building a hybrid cloud network requires a strategic plan to ensure successful integration between public and private cloud services, as well as any on-premises applications and data. In reality, many companies find themselves creating a plan after they’ve already started utilizing different providers and clouds. Though that’s not unusual, there are factors to consider around efficiency and constraints. For example, it’s important to know which applications can be successfully migrated and run in which environments, and where a particular set of data is allowed to reside.
Capacity Planning In The Cloud Era
According to the State of the Cloud Survey, the top initiative in 2017 for the majority of all cloud users is optimizing costs. While public clouds can appear to be a low-cost option, the price can jump quickly with heavy usage, especially when utilizing a dynamic cloud that continues to scale as needed. On the flip side, setting up a private cloud is not an insignificant endeavor, given the physical hardware environments needed. One method to curtail costs, which comes up in my conversations with Avi Networks customers, is to look at ways to minimize investment in hardware load balancers where software alternatives are available.
Over the past few years, I have had conversations with network administrators and architects at several large enterprises and found that hardware load balancers are significantly over-provisioned, to the tune of about 80%. One hapless network administrator at a large online retailer confided in me by saying, “I would rather run at 20% capacity for the better part of the year than be caught without being able to handle sudden traffic bursts. The potential loss of business and lack of peace of mind is just not worth it.” Software load balancers architected for cloud-native applications are finally making it possible to expand or shrink load-balancing capacity dynamically in response to real-time traffic needs.
To make the shift to a hybrid cloud computing environment, a few key best practices will help ensure success:
1. Automate Services
Hybrid clouds are designed to thrive on automation. For example, using a next-generation load-balancing solution allows predictive app auto-scaling. Such systems are analytics-driven and can automatically recognize changing traffic patterns in real time and spin up additional instances without human intervention. This end-to-end automation across the environment is made possible when a hybrid cloud traffic management system is in place.
IT services teams can build this self-service infrastructure to not only optimize computing resources and provisions on the fly but also to shift workloads as needed. These types of capabilities provide the agility that hybrid clouds promise with built-in elasticity, responsiveness and efficiency.
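The predictive auto-scaling described above can be sketched in miniature. This Python snippet is purely illustrative and is not any vendor's API: the per-instance capacity, the minimum pool size, and the list-of-strings stand-in for real cloud instances are all hypothetical, and a real system would call the provider's API and use analytics to predict traffic rather than react to it.

```python
import math

def instances_needed(current_rps, capacity_per_instance=500, min_instances=2):
    """Instances required for the observed request rate (both defaults are hypothetical)."""
    return max(math.ceil(current_rps / capacity_per_instance), min_instances)

def autoscale(pool, current_rps):
    """Grow or shrink the pool to match traffic, with no human intervention."""
    target = instances_needed(current_rps)
    while len(pool) < target:
        pool.append(f"instance-{len(pool)}")  # stand-in for an API call that spins up an instance
    while len(pool) > target:
        pool.pop()  # stand-in for terminating an instance
    return pool

pool = autoscale([], 2600)   # traffic burst: pool grows to 6 instances
pool = autoscale(pool, 100)  # traffic subsides: pool shrinks to the 2-instance floor
```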
2. Centralize Management
Managing cloud services with multiple providers or across environments doesn’t need to be challenging; network teams simply need a single, central point of management across all environments, no matter where applications are running. Because public and private cloud infrastructures operate independently, it’s critical to use technology that provides portability of data and applications between clouds.
For example, when it comes to application networking services, software load balancers that combine central management together with per-app delivery services enable a high degree of customization and flexibility. The alternative of deploying an expensive, monolithic hardware load balancer in front of multiple applications creates problems when each application needs to be maintained or updated, causing downtime for others.
3. Use Vendor-Agnostic Services
Now is the time to take advantage of the healthy competition that is brewing between cloud providers to avoid getting locked into a single cloud provider. Because not all cloud providers deliver consistent services, it behooves companies to remain nimble and test out different services to find the ones that work best.
By keeping the marketplace open and utilizing different providers, companies can take advantage of the myriad options to lower costs and increase performance, especially when they build a hybrid cloud that utilizes the best-of-breed from private and public clouds.
Hybrid cloud computing is growing in popularity because it offers companies flexibility, scalability and agility. To capitalize on this environment, IT teams must spend time creating a strategy that matches their organizational requirements. Putting private and public clouds together requires automation tools and management capabilities to make a system efficient and cost-effective over the long haul.
By Ranga Rajagopalan
For more information and quotes contact us.
The cloud is increasingly a part of business, and any failure in distributed infrastructures could result in a potentially costly downtime.
Cloud computing is a reality that most businesses today are facing. While there are still holdouts, especially businesses with security and data sovereignty issues, the cloud will be prevalent across practically all businesses in the medium term. In fact, if the 1990s and 2000s were all about having an online presence as the minimum requirement for brands, then the next five years are all about businesses completing their cloud migration.
Gartner estimates that by 2022, businesses will have shunned their corporate “no cloud” policies and embraced the benefits of cloud platforms, despite some potential risks.
Of course, the benefits outweigh the potential risks: shorter time-to-market, lower infrastructure and storage costs, greater agility in using IT resources, and the ability to optimize the use of infrastructure.
However, there is also a potential downside. Given that your business does not have 100 percent control over the infrastructure when you deploy apps and services with a cloud provider, you might be worried about leaving your business assets and reliability in the hands of a third party.
Significant infrastructure downtime is among a business’ worst nightmares, as it can mean losses in terms of sales, productivity, and customer trust. Other concerns include security breaches, software issues, or even human errors — all of these can lead to tangible costs with monetary value.
What’s important is for a business to ensure it has adequate redundancies and safeguards in place, which can help mitigate the potentially damaging effects of such risks and threats.
In this article, we will discuss the best practices that can help ensure the reliability of your cloud-based systems and preserve the integrity of your service in the event of downtime. These particularly involve Disaster Recovery (DR) solutions, as well as Business Continuity (BC). Together, BCDR means your system can bounce back from any eventuality, including downtime, data loss, data breaches, and similar cloud catastrophes.
Disaster Recovery as a Service
With the emergence of the cloud as the preferred infrastructure for businesses, the need for services that give assurance of data integrity has also risen. This has brought Disaster Recovery as a Service, or DRaaS, to light, and providers of all sizes are now offering their own DRaaS solutions.
Both AWS and Azure, for example, provide DRaaS services on their respective cloud infrastructures, which ensure that businesses running their systems on the cloud can have faster disaster recovery capabilities without the expense of deploying systems on second, third, or additional sites.
Independent providers also offer similar services, such as IBM, Idealstor, nScape, and the like. Some of these solutions specifically target cloud users, although these services can also provide an added layer of assurance for businesses that run their systems on on-premises deployments.
Not all DRaaS options are equal, however. As a business, you will need to take the following matters into consideration to ensure your DR capabilities are on par with today’s standards.
Automation
One disadvantage of the legacy approach to disaster management is that it is mostly manual. If you can remember the tape backups of old, or even making regular off-site backups, these are labor-intensive and require some lead time before business continuity systems kick in.
The advantage of modern BCDR solutions is that they make regular backups and maintain redundant copies of your system without added human intervention. And when disaster or downtime strikes, the redundancies in place automatically bring the system back up to speed, likewise without human intervention.
Unified management
One area where most IT managers have concerns is the ease with which they can manage their BCDR deployments. While this can be done fairly easily in pure-play cloud settings, it can be a different matter altogether when it comes to hybrid cloud deployments, or even on-premises deployments that utilize cloud-based DRaaS.
For this purpose, a good solution will involve unified management across both cloud and on-prem deployments, to ensure that IT management has better visibility over the backups, redundancies, and protocols in place. Solutions like Azure DRaaS promise just this kind of efficiency, given Microsoft’s long track record with Windows Server and with virtualization in hybrid cloud environments.
Regular testing
Another thing IT managers should watch out for is whether their DRaaS provider offers the ability to test the system on a regular basis. This means having the ability to simulate failures in a controlled environment, so that you know how well you can bounce back, how short the time-to-recovery is, and whether any manual intervention is required when such an eventuality arises.
You can expect legacy solutions to require some manpower for such tests, but a modern DRaaS solution should provide some level of automation, so that you can keep poking and prodding your system for potential weaknesses.
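A scripted drill of this kind can be sketched in a few lines of Python. The callables here are placeholders: in a real test, trigger_failover would stop the primary in a sandboxed copy of the environment, and health_check would probe the standby's service endpoint.

```python
import time

def run_dr_drill(trigger_failover, health_check, timeout_s=300, poll_s=1.0):
    """Simulate a failure and measure time-to-recovery in seconds.

    trigger_failover: callable that simulates the outage (e.g. stops the
        primary in a sandboxed copy of the environment).
    health_check: callable returning True once the standby serves traffic.
    Raises if the standby never comes up within the timeout.
    """
    trigger_failover()
    start = time.monotonic()
    while time.monotonic() - start < timeout_s:
        if health_check():
            return time.monotonic() - start
        time.sleep(poll_s)
    raise RuntimeError("standby did not recover within the timeout")
```

Running a drill like this regularly gives you an actual time-to-recovery number instead of a vendor promise.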
Actual post-failure capabilities
Now, this is the biggest test of your DRaaS deployment. Understandably, no business wants any infrastructure failure, but in the event that a disaster hits, it pays to be protected, or at least capable of bouncing back. When such a disaster occurs, you will need to evaluate your BCDR provider: whether they are able to deliver as promised, whether your system can run fully on backups, and how quick the actual time-to-recovery is. Your BCDR provider should have adequate agility and flexibility to address any extended downtime and ensure the fastest possible recovery.
A final word
Businesses should not live in constant fear of system failures, but outages are a reality that IT managers should be aware of. Instead of wondering when the next one will occur, you should be able to anticipate potential system issues through BCDR solutions, which then lets you shift your time and resources to core business activities.
By Daan Pepijn
For more information and quotes contact us.
When you shop for a new computer or laptop, one important feature to look for is the type and number of ports. Ports are docking points that connect external devices, wired connections and more. With the many types of ports and versions of each port available, it can be hard to know what to look for.
To help you find the right computer for your business, here are some of the most common types of ports and what they do.
USB Type A
This is the most common USB connector found on computers. Often simply referred to as a USB port, it is a universal port that can connect everything from external drives to peripherals.
USB Type B
A less common type of USB, a USB Type B port connects docking stations and printers.
USB Type C
The newest type of USB port, the USB Type C is predicted to replace other types of USBs. It is the slimmest version – thus fitting in slimmer laptops and smaller computers – and is reversible, so the connector fits both ways. The USB Type C supports different types of connections, including displays and chargers.
USB 3.0
USB 3.0 offers high-speed transfers, such as between external drives and computers. It has a maximum transfer rate of 5 Gbps, making it a decent option for transferring files over a wired connection.
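To put that 5 Gbps figure in perspective, a quick calculation shows roughly how long a large transfer would take. The 80 percent efficiency factor below is a rough assumption on my part, since real-world throughput always falls short of the signaling rate.

```python
def transfer_time_seconds(file_gb, link_gbps=5.0, efficiency=0.8):
    """Rough time to move a file over a link.

    5 Gbps is USB 3.0's signaling rate; real-world throughput is lower,
    so an efficiency factor (the 80% here is a rough guess) is applied.
    """
    gigabits = file_gb * 8  # convert gigabytes to gigabits
    return gigabits / (link_gbps * efficiency)

# Copying a hypothetical 100 GB drive image over USB 3.0:
print(round(transfer_time_seconds(100)))  # → 200 seconds
```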
MicroSD card reader
A microSD slot reads microSD memory cards that provide removable storage.
SD card reader
This slot reads SD cards from digital cameras. It is also known as a 3-in-1, 4-in-1 or 5-in-1 card reader.
Audio jack
Also known as a headphone jack, the audio jack connects your headphones and microphones to your computer. The 3.5 mm audio jack is the most common type found on computers.
Ethernet
The Ethernet port connects your computer directly to local networks and the internet using a wired connection. Ethernet is the alternative when Wi-Fi is not available or when the Wi-Fi signal is poor.
HDMI
The HDMI port connects your computer to TVs, projectors and other external monitors. The output resolution depends on your computer's graphics card. HDMI also carries audio with video, so you don't need a separate audio connection.
DisplayPort
Similar to HDMI, a DisplayPort connects your computer to an external monitor. It is the most advanced type of display connection, able to output video at resolutions up to 4K and to drive multiple monitors in HD. The DisplayPort appears as its own connector or via a USB Type C port.
DVI
DVI is a more budget-friendly alternative to DisplayPort. A standard (single-link) DVI connection is limited to an output of 1920 x 1200 resolution, and even dual-link DVI needs a second connection to support a 4K monitor. DVI generally appears only on desktop computers, not laptops.
Thunderbolt 3
Thunderbolt 3 is the fastest connection for data transfers, at up to 40 Gbps. It can also connect multiple external monitors at 4K resolution.
VGA
The VGA port connects your monitor to a computer's video card. It is one of the oldest and least powerful display connections, so you won't find it on many current computers.
By Sara Angeles
For more technical questions and quotes, contact us.
A decade-old form of malicious software known as ransomware has been making headlines after cybercriminals hijacked hundreds of thousands of computers worldwide.
Ransomware, which is often transmitted by email or web pop-ups, involves locking up people’s data and threatening to destroy it if a ransom is not paid. The global cyberattack has affected 200,000 Windows computers in more than 150 countries, including China, Japan, South Korea, Germany and Britain.
The cybercriminals have generally targeted hospitals, academic institutions, blue-chip companies and businesses like movie theater chains. The attacks highlight the challenges that organizations face with consistently applying security safeguards on a large scale.
“Not only individuals, but even governments and big companies with so much to lose fail to secure their systems and train their employees about necessary security practices,” said Marty P. Kamden, a marketing executive for the private network service provider NordVPN. “Cautious online behavior would probably have prevented the malware from infecting the network in the first place.”
What can businesses and individuals do to protect themselves from ransomware? Here are some tips from security experts.
Update your software
Security experts believe the malware that spurred this global attack, called WannaCry, may have initially infected machines by getting people to download it through email. After that, the malicious code was able to easily travel to a broader network of computers that were linked together through the Windows file-sharing system. (Users of Macs or other non-Windows computers were not affected.)
The most disheartening revelation from the cyberattack was that there was a fix available for the ransomware before the attack. Microsoft, which makes Windows, released a patch for the WannaCry vulnerability eight weeks ago, said Chris Wysopal, the chief technology officer of Veracode, an application security company.
In other words, if people had simply stayed on top of security updates, their machines would not have been infected. “People kind of got complacent and not vigilant about updating their machines,” Mr. Wysopal said.
Consumers can remedy this by configuring their Windows machines to automatically install the latest software updates.
Even though WannaCry specifically targeted Windows machines, that does not mean Mac or Linux users are off the hook in the future. Other breeds of malware may infect various operating systems, so no matter which device you are using, you should regularly update your software to install the latest security enhancements.
Install antivirus software
In addition to keeping Windows up-to-date with the latest security enhancements, antivirus software can prevent malware from infecting your computer. Mr. Kamden of NordVPN said 30 percent of popular antivirus systems were capable of detecting and neutralizing the ransomware.
Of course, with antivirus software, the same principle applies: Make sure to keep the antivirus app up-to-date, too, so it blocks the latest emerging malware. Also, download antivirus apps only from reputable vendors like Kaspersky Lab, Bitdefender or Malwarebytes, Mr. Kamden said.
Be wary of suspicious emails and pop-ups
Security experts believe WannaCry may have initially infected machines via email attachments. The lesson: Avoid clicking links inside dubious emails, Mr. Kamden said.
How do you spot a fishy email? Look carefully at the email address of the sender to see if it is coming from a legitimate address. Also, look for obvious typos and grammatical errors in the body. Hover over hyperlinks (without clicking on them) inside emails to see whether they direct you to suspicious web pages. If an email appears to have come from your bank, credit card company or internet service provider, keep in mind that they will never ask for sensitive information like your password or social security number.
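The hover-over-hyperlinks check can even be automated. This Python sketch, using only the standard library, flags links in an HTML email body whose actual destination falls outside the domain the message claims to come from. It is a heuristic illustration (the example addresses are made up), not a substitute for real email security tooling.

```python
from html.parser import HTMLParser
from urllib.parse import urlparse

class LinkCollector(HTMLParser):
    """Collects (href, visible text) pairs from an HTML email body."""
    def __init__(self):
        super().__init__()
        self.links, self._href = [], None

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            self._href = dict(attrs).get("href")

    def handle_data(self, data):
        if self._href:
            self.links.append((self._href, data.strip()))
            self._href = None

def suspicious_links(html_body, claimed_domain):
    """Flag links whose real destination is outside the claimed sender's domain."""
    collector = LinkCollector()
    collector.feed(html_body)
    return [(href, text) for href, text in collector.links
            if urlparse(href).netloc
            and not urlparse(href).netloc.endswith(claimed_domain)]

# A link that displays a bank's address but points somewhere else entirely:
body = '<a href="http://phish.example/login">www.yourbank.com/login</a>'
print(suspicious_links(body, "yourbank.com"))  # the mismatched link is flagged
```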
In addition, ransomware developers often use pop-up windows advertising software products that claim to remove malware. Do not click on anything in these pop-ups; instead, close the windows safely.
Create backups of your data
In the event that a hacker successfully hijacks your computer, you could rescue yourself with a backup of your data stored somewhere, like on a physical hard drive. That way, if a hacker locked down your computer, you could simply erase all the data from the machine and restore it from the backup.
In general, you should be creating a copy of your data in the first place, in case your computer fails or is lost. To be extra safe from hackers, after backing up your data onto an external drive, unplug the drive from the computer and put it away.
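As a rough sketch of that routine, the Python snippet below copies a folder into a dated directory on an external drive. The paths and the `backup` helper are illustrative assumptions, not a prescribed tool.

```python
import shutil
from datetime import date
from pathlib import Path

def backup(source: str, drive: str) -> Path:
    """Copy `source` into a dated folder on an external drive.
    Point `drive` at your own mount point; requires Python 3.8+
    for the dirs_exist_ok flag."""
    dest = Path(drive) / f"backup-{date.today().isoformat()}"
    shutil.copytree(source, dest, dirs_exist_ok=True)
    return dest
```

Once the copy finishes, eject and unplug the drive, so ransomware running on the computer has no path to the backup.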
Create a security plan for your business
For larger businesses with hundreds or thousands of employees, applying security updates organizationwide can be difficult. If one employee’s machine lacks the latest security software, it can infect other machines across the company network.
Mr. Wysopal said businesses could learn from how WannaCry spread through the Windows file-sharing system by developing a strict schedule for when computers companywide should automatically install the latest software updates. Businesses should determine the best time to apply these security updates to office computers without interrupting productivity, he added.
Information technology professionals should also regularly educate and test employees on spotting suspicious emails, said Matt Ahrens, vice president of Crypsis, a cybersecurity firm.
What to do if already infected
If you are already a victim of ransomware, the first thing to do is disconnect your computer from the internet so it does not infect other machines. Then report the crime to law enforcement and seek help from a technology professional who specializes in data recovery to see what your options might be. If there are none, don’t lose hope: There may be new security tools to unlock your files in the future.
In some extreme cases, it might make sense to pay a ransom if you have no backups and the encrypted files are valuable, Mr. Wysopal said. But he added that with WannaCry, people definitely should not pay the ransom. That’s because the hackers are apparently overloaded with requests from victims asking for their data to be released — and many who have paid the ransom are not hearing back.
By BRIAN X. CHEN
Contact us for help if you suspect you are infected.
There are more reasons than ever to understand how to protect your personal information.
Major website hackings seem ever more frequent. Investigators believe that a set of top-secret National Security Agency hacking tools were offered to online bidders this summer.
And many of those worried about expanded government surveillance by the N.S.A. and other agencies have taken steps to secure their communications.
In a recent Medium post, Quincy Larson, the founder of Free Code Camp, an open-source community for learning to code, detailed the reasons it might be useful for people to make their personal data more difficult for attackers to obtain.
“When I use the term ‘attacker’ I mean anyone trying to access your data whom you haven’t given express permission to,” he wrote, “whether it’s a hacker, a corporation or even a government.”
In an interview, Mr. Larson walked us through some of the basic steps he recommended. We added a few of our own, based on additional interviews.
Now, let’s encrypt.
1. Download Signal, or start using WhatsApp, to send text messages.
Encryption is a fancy computer-person word for scrambling your data so no one can understand what it says without a key. But encrypting is more complex than just switching a couple of letters around.
Mr. Larson said that by some estimates, with the default encryption scheme that Apple uses, “you’d have to have a supercomputer crunching day and night for years to be able to unlock a single computer.”
He said the best way to destroy data was not to delete it, because it could potentially be resurrected from a hard drive, but to encode it in “a secure form of cryptography.”
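To make the idea of key-based scrambling concrete, here is a deliberately simple sketch in Python. It is a toy XOR stream cipher built from SHA-256 and is not secure for real use; it only illustrates that without the key the scrambled bytes are unreadable, while applying the same key again restores them.

```python
import hashlib

def xor_cipher(data: bytes, key: bytes) -> bytes:
    """Toy symmetric cipher: XOR the data against a keystream derived
    from SHA-256(key + counter). Running it a second time with the
    same key recovers the original bytes. Illustration only -- use
    vetted tools like FileVault or BitLocker for real data."""
    out = bytearray()
    counter = 0
    block = b""
    for i, byte in enumerate(data):
        if i % 32 == 0:  # refresh the 32-byte keystream block
            block = hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
            counter += 1
        out.append(byte ^ block[i % 32])
    return bytes(out)
```

The design choice worth noticing: the key never appears in the output, so someone holding only the ciphertext has nothing useful, which is what makes encrypting data a stronger way to retire it than merely deleting it.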
Signal is one of the most popular apps for those who want to protect their text messages. It is free and extremely easy to use. And unlike Apple’s iMessage, which is also encrypted, the code it uses to operate is open source.
“You can be sure by looking at the code that they’re not doing anything weird with your data,” Mr. Larson said.
“In general, the idea behind the app is to make privacy and communication as simple as possible,” said Moxie Marlinspike, the founder of Open Whisper Systems, the organization that developed Signal.
That means that the app allows you to use emojis, send pictures and enter group texts.
One bit of friction: You do have to persuade your friends to join the service, too, if you want to text them. The app makes that easy to do.
WhatsApp, the popular chat tool, uses Signal’s software to encrypt its messaging. And in Facebook Messenger and Google’s texting app Allo, you can turn on an option that encrypts your messages.
Mr. Marlinspike said the presidential election had spurred a lot of interest in Signal, leading to a “substantial increase in users.”
When asked to speculate why that was, Mr. Marlinspike simply said, “Donald Trump is about to be in control of the most powerful, invasive and least accountable surveillance apparatus in the world.”
Signal is available for both Android and iOS.
2. Protect your computer’s hard drive with FileVault or BitLocker.
Your phone may be the device that lives in your pocket, but Mr. Larson described the computer as the real gold mine for personal information.
Even if your data is password protected, someone who gained access to your computer “would have access to all your files if they were unencrypted,” Mr. Larson said.
Luckily, both Apple and Microsoft offer built-in encryption (FileVault on macOS, BitLocker on Windows) that simply needs to be turned on.
3. The way you handle your passwords is probably wrong and bad.
You know this by now. Changing your passwords frequently is one of the simplest things you can do to protect yourself from digital invasion.
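One practical complement to that advice: let a machine pick your passwords. The sketch below uses Python’s standard `secrets` module; the length and character set shown are one reasonable choice, not an official recommendation.

```python
import secrets
import string

def make_password(length: int = 16) -> str:
    """Generate a random password from a cryptographically secure
    source (the stdlib `secrets` module, designed for exactly this)."""
    alphabet = string.ascii_letters + string.digits + string.punctuation
    return "".join(secrets.choice(alphabet) for _ in range(length))
```

A password manager does the same job with less friction, and also solves the harder problem of remembering a different password for every site.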
By JONAH ENGEL BROMWICH
Please contact us for more information.
Q. How do I know if my computer updated itself to the new Creators version of Windows 10 that just came out?
A. Microsoft’s recently released Creators Update for Windows 10 is also known as Version 1703. You can see which version number is currently running on your PC by pressing the Windows and I keys to open the Settings app (or choosing the Settings app from the Start menu) and selecting the System icon.
At the bottom of the list on the left side of the System Settings box, choose About. Here, you can see the edition of Windows 10 installed on the computer (like Windows 10 Home or Windows 10 Pro), along with the version number and other technical information. If you see 1703 listed as the version number, your computer has updated itself to the Creators Update.
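For the curious, similar information can be pulled up programmatically. This short Python example uses the standard `platform` module; the exact strings it prints vary by operating system and build.

```python
import platform

# Programmatic counterpart to Settings > System > About: print the
# OS name, release and build string for the current machine.
print(platform.system(), platform.release(), platform.version())
```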
Last month’s upgrade was Microsoft’s most recent revision of the Windows 10 operating system, arriving less than a year after the Anniversary Update (Version 1607) in August 2016. The Creators Update includes several new features, like a 3-D revamp of the Paint program; another way to confirm you have the latest version of Windows 10 is to check your Apps list for the new Paint 3D program. The update also brings improvements to the Microsoft Edge browser and enhancements designed for video game players.

However, as with any major system update, bugs are bound to surface. On Microsoft’s own online forums, some users have reported problems with Bluetooth and internet connectivity, computer-memory issues, Dolby sound failures, crashing apps and other woes. System patches and workarounds will hopefully fix these issues in the near future.
If your computer has already updated itself to the new version, you may want to explore the Settings app a bit more to make sure you have the new operating system configured the way you want it. For example, open the Privacy icon on the main Settings screen to confirm the amount of personal data you want to share with Microsoft.
By J. D. BIERSDORFER