MakeTechEasier: Hostnames are an important piece of the Linux networking puzzle.
EnterprisersProject: 7 clues DevOps job candidates are more hype than substance
Google's Chrome OS team is apparently redesigning the Files app of the Linux-based Chrome OS operating system for Chromebooks, adding a new "My Files" section.
Liberapay, a donation platform aiming to fund open source software and free culture, is in trouble.
Linux.com: Clear Linux was designed specifically for the cloud while best leveraging Intel hardware.
OMGUbuntu: Memory boost will also help the machine run games better
LinuxInsider: Ascensio System SIA recently released its free office suite upgrade -- OnlyOffice Desktop Editors -- with a ribbon and tab interface plus numerous updated features.
LinuxJournal: The question of the earliest GCC compiler version to support for building the Linux kernel comes up periodically.
EnterpriseNetworkingPlanet: Startup emerges from stealth mode, with a network hardware agnostic OS and $15M in funding.
The EU General Data Protection Regulation, or GDPR, came into force on May 25. With every organization with customers and suppliers in the European Union now accountable for the way in which they handle or process personal data, much work has been done to ensure compliance by the deadline. As a result, all levels of a business are now concentrated on meeting the requirements of the new regulation, throwing the issue of data protection into focus like never before.
When you consider how big and complex IT networks have become in recent times, however, it has become almost impossible to detect just when and how a security breach or network failure might occur. Unsurprisingly, network security and information assurance are crucial to GDPR compliance, with the regulation stating that measures must be put in place to mitigate the risk associated with assuring information integrity and availability in the face of threats such as malicious code or distributed denial of service (DDoS) attacks.
It’s hard to remember a time when people thought Amazon was nuts for going into the cloud computing business, since it was so far removed from the company’s core ecommerce business. No one is laughing now.
It seems history could repeat itself. According to an article in The Information, Amazon is rumored to be targeting a new industry, albeit one dominated by a giant player and multiple healthy competitors — the network switching business. The move would put it in direct competition with Cisco, HPE, Juniper Networks, and Arista.
Tunnels for networking are not good. We saw a real-life example with the twelve Thai boys who were trapped at the end of a tunnel, a very narrow flooded section blocking their passage. The tunnel offered them only one way out, and that path was not passable. This is what happens in networks. We're thankful for the heroic rescue of these brave boys, but networks don't always fare as well.
You will hear others speak about how a tunnel-based virtual network is the next amazing trend in networking. In fact, an analyst recently told me tunnels are great. And they are, when used for their intended purpose. But using tunnels to get aggregates of packets to go where they wouldn't go otherwise is dangerous and will lead to the accumulation of technical debt.
An infrastructure design consideration that arguably frustrates users, and creates a never-ending headache for network administrators, is the quality of Wi-Fi service in a building. Typically, a poor user experience is one where users have either no signal on their wireless device or see “full bars” but cannot connect to the network.
In an office environment poor Wi-Fi performance is undoubtedly an annoyance, but in a hospital, it could prevent medical staff from delivering care in a timely manner. Waiting for a mobile terminal to retrieve the medical history of a seriously ill patient can literally be a matter of life and death.
Proper cabling is the foundation of Wi-Fi performance
Configuring a wireless access point (AP) system is a complex project and is not the subject of this post, although APs and AP systems of course play an important role in Wi-Fi network best practice. To give network integrators the best chance of success, the cabling infrastructure must be in place to support optimal installation and placement of APs.
Extreme Networks is contending for greater influence from the data center to the network edge, but it has some obstacles to overcome.
The company is still grappling with how to best integrate, use and effectively sell the technologies it has acquired from Avaya and Brocade in the past year, as well as incorporate and develop its own products to do battle in the cloud, mobile and edge computing environments of the future. Remember, too, that Extreme bought Zebra Technologies' wireless LAN business in 2016 for $55 million.
In terms of results that Wall Street watches, Extreme Networks grew revenue 76% to $262 million in its recent fiscal third quarter. According to Extreme, those gains were fueled mostly by growth from its acquisitions, plus roughly 8% growth in its own products.
As networks become more software-driven, they generate vastly greater amounts of data, which creates challenges: adhering to compliance and customer-privacy guidelines while harvesting data whose sheer volume is physically impossible for humans to tackle. But the vast amounts of data also present an opportunity: leveraging analytics and machine learning to gather insights that can move network management from reactive to proactive to assurance. This doesn't mean a purely technological shift, because the human element won't simply go away. Instead, by combining human intellect and creativity with the computing power AI offers, innovative design and management techniques will be developed to build self-improving intelligent algorithms. These algorithms allow networks to operate in ways that far surpass networks of the past.
A 75-mile, quantum-secured, high-speed fiber link has been built in the United Kingdom, the largest internet supplier there has said.
Particles of light, known as photons, carry encryption keys over the same connection as data. Hijacking those photons within the link immediately notifies the system that the keys have become bad — the thief interfering with those keys alters them and then they can’t be used by the interceptor — and the traffic becomes garbled instantly.
It’s “virtually un-hackable,” said Gavin Patterson, outgoing BT chief executive, announcing the link at Internet of Things World Europe that I attended in London last month.
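Links like this typically rely on quantum key distribution protocols such as BB84, in which mismatched measurement bases expose any interceptor. A toy sketch of BB84's basis sifting (purely illustrative: it models the statistics of basis choice and disturbance, not actual photons; function and variable names are assumptions):

```python
import secrets

def bb84_sift_error_rate(n_bits=2000, eavesdrop=False):
    """Toy BB84 statistics: Alice encodes random bits in random bases; Bob
    measures in random bases; they keep only positions where their bases
    matched (sifting). An eavesdropper measuring in her own random basis
    disturbs ~25% of the sifted bits, which the error check exposes."""
    alice_bits  = [secrets.randbelow(2) for _ in range(n_bits)]
    alice_bases = [secrets.randbelow(2) for _ in range(n_bits)]
    bob_bases   = [secrets.randbelow(2) for _ in range(n_bits)]

    bob_bits = []
    for bit, a_basis, b_basis in zip(alice_bits, alice_bases, bob_bases):
        send_basis = a_basis
        if eavesdrop:
            eve_basis = secrets.randbelow(2)
            if eve_basis != a_basis:        # wrong basis randomizes the bit
                bit = secrets.randbelow(2)
            send_basis = eve_basis          # photon is re-sent in Eve's basis
        if b_basis == send_basis:
            bob_bits.append(bit)            # matching basis: faithful readout
        else:
            bob_bits.append(secrets.randbelow(2))  # mismatched: random readout

    # Sift: publicly compare bases, keep positions where Alice's and Bob's match.
    sifted = [(a, b) for a, b, ab, bb in
              zip(alice_bits, bob_bits, alice_bases, bob_bases) if ab == bb]
    errors = sum(1 for a, b in sifted if a != b)
    return errors / len(sifted)
```

Without an eavesdropper the sifted key agrees exactly; with one, roughly a quarter of the sifted bits disagree, so comparing a sample of the key reveals the tap, which is the "the traffic becomes garbled instantly" behavior described above.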
A key component of SD-WAN is its ability to secure unreliable Internet links and identify anomalous traffic flows.
What keeps me awake at night is the thought of artificial intelligence lying in wait in the hands of bad actors. Artificial intelligence combined with the power of IoT-based attacks will create an environment primed for mayhem. It is easy to write about, but it is hard for security professionals to combat. AI attacks with a force, severity, and speed that can change the face of a network and its applications in seconds.
When I think of the capabilities artificial intelligence has in the world of cybersecurity I know that unless we prepare well we will be like Bambi walking in the woods. The time is now to prepare for the unknown. Security professionals must examine the classical defense mechanisms in place to determine if they can withstand an attack based on artificial intelligence.
Today’s threat landscape has led organizations to defend their networks with numerous point solutions, most of which are complex and require significant attention to operations and ongoing maintenance. While large enterprises often have sufficient skilled resources to support the security infrastructure, small- to medium-sized businesses sometimes struggle in this area.
For the SMB market in particular, Network Security-as-a-Service is an attractive offering. It allows companies to get the very best security technology at an affordable price point while having someone else maintain the complex infrastructure.
This has given rise to a genre of service provider that builds its own network backbone in the cloud and embeds network security as an integral service. More and more players are starting to offer this kind of service. They generally start with a global network backbone and software-defined wide-area networking (SD-WAN), add a full security stack, and connect to various cloud services from Amazon, Google, Microsoft, etc. Customers connect their data centers, branches, end users, and cloud apps to this network, and away they go. It’s networking, plus network security, all in one place, and all managed as a service.
The enterprise wide area network (WAN) is a mission-critical resource for most enterprises. And when it came to managing and running the WAN, enterprises could choose between two distinct models: Do It Yourself (DIY) or managed WAN services. But with the evolution of SD-WAN, we're seeing a new type of telco solution that merges elements of both.
Traditional WAN management models
With DIY, enterprise IT procures last-mile access at a location and deploys routers, WAN optimization, and network security appliances from several vendors. Continuous monitoring and management is done in house or via a managed service provider. In short, enterprise IT owns the complex task of maintaining, configuring and monitoring the WAN for availability and optimal performance.
Function-as-a-service (FaaS) technologies, including AWS Lambda, Azure Functions and IBM/Apache OpenWhisk, are experiencing mass adoption, even in private clouds, and it’s easy to see why. The promise of serverless is simple: developers and IT teams can stop worrying about their infrastructure, system software and network configuration altogether. There’s no need to load-balance, adjust resources for scale, monitor for network latency or CPU performance. Serverless computing can save you a lot of time, money and operational overhead, if you play your cards right.
Say goodbye to the idle instance
There’s also less waste with serverless computing. You only pay for infrastructure in the moment your code executes (that is, each time a request is processed). It’s the end of the server that just sits there idle. But with all these advantages, IT practitioners also face an avalanche of complexity and new challenges.
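The pay-per-invocation model shows up in how function code is structured. A minimal sketch of a Lambda-style function in Python (the `handler(event, context)` signature follows AWS Lambda's Python convention; the payload fields are illustrative assumptions):

```python
import json

def handler(event, context=None):
    """Minimal FaaS-style handler: you are billed only while this runs.
    There is no server to size or load-balance; the platform scales
    invocations independently and the function holds no state between calls."""
    name = (event or {}).get("name", "world")
    return {
        "statusCode": 200,
        "body": json.dumps({"message": f"hello, {name}"}),
    }

# Invoked locally the same way the platform would call it:
resp = handler({"name": "serverless"})
```

Nothing here provisions, monitors, or scales an instance, which is exactly the operational overhead the serverless model takes off the table.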
You have probably heard all sorts of claims from various vendors that their solutions provide or support Intent-Based Networking (IBN), yet the wide range of capabilities behind those claims can be very confusing.
One way to make sense of this is to apply a "maturity model" like the one used to classify the maturity of RESTful web services implementations. The Richardson Maturity Model divides the capabilities of RESTful web services into levels, starting from 0 and going up as the maturity of the implementation increases. Just like IBN, REST received its fair share of hype. While the REST principles were clearly defined in Roy Fielding’s dissertation, in practice the REST label was attached to implementations with wildly varying levels of conformance to the original principles, from anything that had the words “HTTP” and “JSON” in it to full-blown “hypermedia as the engine of application state.”
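The spread between those REST maturity levels can be illustrated with request/response shapes (all paths, fields, and link names below are hypothetical examples, not from any real API):

```python
# Level 0: a single endpoint, everything tunneled through POST (RPC over HTTP).
level0_request = {
    "method": "POST", "path": "/api",
    "body": {"action": "getOrder", "orderId": 42},
}

# Levels 1-2: distinct resources addressed by URI, with HTTP verbs
# and status codes carrying the semantics.
level2_request = {"method": "GET", "path": "/orders/42"}

# Level 3: hypermedia as the engine of application state -- the response
# itself links to the next legal actions, so clients navigate rather
# than hard-code URIs.
level3_response = {
    "id": 42,
    "status": "open",
    "_links": {
        "self":   {"href": "/orders/42"},
        "cancel": {"href": "/orders/42/cancel"},
        "pay":    {"href": "/orders/42/payment"},
    },
}
```

The IBN market today looks much the same: "level 0" products relabel existing automation, while higher levels close the loop from declared intent to continuously verified network state.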
Once, the data center was home only to separate compute, storage and networking infrastructures. Sure, they communicated, but the disparate systems required dedicated management and hardware to care for and feed these heterogeneous platforms.
We live in an exciting era for IT. Countless new technologies are changing how networks are built, how access is provided, how data is transmitted and stored, and much more. Cloud, IoT, edge computing and machine learning all offer unique opportunities for organizations to digitally transform the way they conduct business. Different as these technologies are, they are unified by their dependence on a properly functioning network, on what might be called “network continuity.” The key component for achieving network continuity is visibility.
It’s no secret that new and emerging technologies have always driven networking best practices. With such a wide range of business objectives and activities relying on IT, network performance really is a life or death issue for most companies. So, it’s critical that we maintain a firm grasp on the latest industry trends in order to make informed, strategic network management decisions.
Looking to seriously amplify the use of fog computing, the IEEE has defined a standard that will lay the official groundwork to ensure that devices, sensors, monitors, and services are interoperable and will work together to process the seemingly boundless data streams that will come from IoT, 5G and artificial intelligence (AI) systems.
The standard, known as IEEE 1934, was largely developed over the past two years by the OpenFog Consortium, which includes ARM, Cisco, Dell, Intel, Microsoft, and Princeton University.
If you think you know the problems facing the Internet of Things (IoT), a new Deloitte report, Five vectors of progress in the Internet of Things, offers a great chance to check your assumptions against the IoT experts.
Despite the fancy-pants “vectors of progress” language, the report’s authors — David Schatsky, Jonathan Camhi, and Sourabh Bumb — basically lay out the IoT’s chief technical challenges and then look at what’s being done to address them. Some of the five are relatively well-known, but others may surprise you.
Frankly (no pun intended), I have to admit that I’m growing increasingly frustrated with certain trends in networking.
For example, it’s not that I don’t like the dream or idea of software-defined networking (SDN) — it’s not that I don’t think it’s superior to the older way of setting up or monitoring a network. It’s just that I’m becoming increasingly concerned that small- to medium-size enterprises (SMEs) won’t be able to keep up. And the media that follows this trend isn’t really bringing to light the extreme cost of some of these systems.
Price-wise, many of these product lines are intended for large networks; there's no way a smaller company could even begin to afford them. For example, one trainer told me that a certain SDN product was scaled to start at 500-site deployments!
Like any industry, networking has a proprietary slew of acronyms and jargon that only insiders understand. Look no further than Network World’s searchable glossary of wireless terms.
Turns out, multiplexing has nothing to do with going to the movies at a place with more than one theater.
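For the record, multiplexing means carrying several signals over one shared medium. A toy round-robin time-division multiplexing (TDM) sketch in Python (function names and the string "units" standing in for signal samples are purely illustrative):

```python
def tdm_multiplex(channels):
    """Round-robin TDM: interleave one unit from each channel per time
    slot onto a single shared link."""
    frames = []
    for slot in zip(*channels):   # one time slot = one unit per channel
        frames.extend(slot)
    return frames

def tdm_demultiplex(frames, n_channels):
    """Recover each channel by taking every n-th unit off the link."""
    return [frames[i::n_channels] for i in range(n_channels)]

link = tdm_multiplex([["a1", "a2"], ["b1", "b2"], ["c1", "c2"]])
# link == ["a1", "b1", "c1", "a2", "b2", "c2"]
```

Frequency- and wavelength-division multiplexing follow the same idea, splitting the medium by frequency or light wavelength instead of by time slot.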
I also like to think that each networker has their own favorite list of terms, ready to share at a moment’s notice during family dinners, holidays and networking events … or maybe that’s just me?
The data-center network is a critical component of enterprise IT’s strategy to create private and hybrid-cloud architectures. It is software that must deliver improved automation, agility, security and analytics to the data center network. It should allow for the seamless integration of enterprise-owned applications with public cloud services. Over time, leading edge software will enable the migration to intent-based data-center networks with full automation and rapid remediation of application-performance issues.