Edraw Max is all-in-one diagramming software. It includes a floor planner, with templates for garden design.
Apache virtual hosts allow multiple websites to run on one web server. With virtual hosts, you can specify each site's document root (the directory that contains the website's files), create a separate security policy for each site, use a different SSL certificate for each site, and much more.
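A minimal virtual-host definition might look like the sketch below. The domain name, file paths and certificate locations are placeholders, not values from any real deployment:

```apache
# /etc/apache2/sites-available/example.com.conf -- all names and paths are placeholders
<VirtualHost *:443>
    ServerName example.com
    DocumentRoot /var/www/example.com/public_html

    # A per-site SSL certificate, one of the features mentioned above
    SSLEngine on
    SSLCertificateFile    /etc/ssl/certs/example.com.crt
    SSLCertificateKeyFile /etc/ssl/private/example.com.key

    ErrorLog  ${APACHE_LOG_DIR}/example.com-error.log
    CustomLog ${APACHE_LOG_DIR}/example.com-access.log combined
</VirtualHost>
```

Each additional site gets its own `<VirtualHost>` block with its own document root, certificate and logging policy.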
eWEEK: A new tool aims to help organizations conduct penetration tests against Kubernetes container-orchestration clusters, helping them identify and remediate cyber-security issues.
Dropbear is a small and lightweight SSH server and client that can replace OpenSSH.
Cloudgizer offers high performance, a small footprint, and safer, more productive programming in C.
LinuxUprising: GSConnect is a Gnome Shell extension that integrates your Android device(s) with the desktop.
OpenMeetings is an open source web-based application for presenting, online training, web conferencing, collaborative whiteboard drawing and document editing, and user desktop sharing.
castero is designed to be easy to use and targeted at users who want lightweight command line applications instead of bloated GUI-based alternatives.
MakeTechEasier: Ubuntu is good, but it can be bloated and not suitable for an old PC.
eWEEK - RSS Feed
Network World Networking
As enterprises endeavor to expand domestic and global footprints, agile network infrastructure connectivity across geographies continues to prove an ongoing challenge. In particular, ensuring that data shared over these networks is protected from unauthorized access is a primary directive in today’s evolving cyber threat landscape. These often-contradictory demands call for IT decision makers to invest in innovation that will facilitate network flexibility and agility without compromising security, productivity or performance.
This challenge raises a simple question: how can a WAN deliver the flexibility and agility an organization needs to grow without increasing its exposure to data breaches and other security problems? After all, if the cost of convenience is increased network vulnerability, can it be considered a sound approach?
Hybrid IT networking has come a long way in the past decade, as enterprises have gradually come to embrace and trust cloud computing. Yet, despite the growing popularity of both private and public clouds, many enterprise IT teams are still struggling with how to handle the resulting migration challenges.
Originally envisioned as simply a way to reduce costs, migration to the cloud has escalated in large part due to a drive for greater agility and flexibility. In fact, according to a recent State of the Network global survey of more than 600 IT professionals, the top two reasons enterprises are moving to the cloud are to increase IT scalability and agility, and to improve service availability and reliability. The need to lower costs was ranked number four, tied with the desire to deliver new services faster.
There is currently a strong trend toward application modularization: splitting a large, hard-to-change monolith into a focused, cloud-native microservices architecture. A monolith keeps much of its state in memory and replicates it between instances, which makes it hard to split and scale. Scaling up can be expensive, and scaling out requires replicating the state and the entire application rather than only the parts that actually need to be replicated.
Microservices, by contrast, separate the logic from the state. That separation allows the application to be broken into a number of smaller, more manageable units, making them easier to scale. A microservices environment therefore consists of multiple services communicating with each other: all communication between services is initiated and carried out with network calls, and services are exposed via application programming interfaces (APIs). Each service has its own purpose and serves a unique business value.
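The pattern of services talking to each other over network APIs can be sketched with the standard library alone. This is a toy illustration, not any real system: the "inventory" service, its port and its payload are all invented for the example.

```python
import json
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.request import urlopen

# A toy "inventory" microservice exposing a single API endpoint.
# The SKU and stock count are illustrative values.
class InventoryHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        body = json.dumps({"sku": "A-100", "in_stock": 7}).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):  # keep the demo quiet
        pass

def start_service(port=0):
    """Run the service in a background thread; port 0 picks a free port."""
    server = HTTPServer(("127.0.0.1", port), InventoryHandler)
    threading.Thread(target=server.serve_forever, daemon=True).start()
    return server

# A second service (here just a function) consumes the API over the network,
# never touching the inventory service's internal state directly.
def check_stock(port):
    with urlopen(f"http://127.0.0.1:{port}/inventory") as resp:
        return json.loads(resp.read())
```

The key property the sketch shows is that the consumer only sees the API contract (a JSON response), so either side can be scaled or replaced independently.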
Network administrators, IT managers and security professionals face a never-ending battle, constantly checking on what exactly is running on their networks and the vulnerabilities that lurk within. While there is a wealth of monitoring utilities available for network mapping and security auditing, nothing beats Nmap's combination of versatility and usability, making it the widely acknowledged de facto standard.
What is Nmap?
Nmap, short for Network Mapper, is a free, open-source tool for vulnerability scanning and network discovery. Network administrators use Nmap to identify what devices are running on their systems, discovering hosts that are available and the services they offer, finding open ports and detecting security risks.
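At its simplest, "finding open ports" means attempting TCP connections and seeing which succeed. The sketch below shows that basic idea in Python; it is not Nmap itself, which adds raw-socket scan types, service and OS detection, timing control and much more on top of this primitive.

```python
import socket

def is_port_open(host, port, timeout=1.0):
    """Return True if a TCP connection to (host, port) succeeds --
    the same basic check behind a TCP connect scan."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

def scan(host, ports):
    # Report which of the given ports accept TCP connections.
    return [p for p in ports if is_port_open(host, p)]
```

Only scan hosts you are authorized to probe; even this trivial version generates connection attempts that intrusion-detection systems will notice.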
Cisco’s strategy of diversifying into a more software-optimized business is paying off – literally.
The software differentiation was perhaps never more obvious than in its most recent set of year-end and fourth quarter results. (Cisco's 2018 fiscal year ended July 28.) Cisco said deferred revenue for the fiscal year was $19.7 billion, up 6 percent overall, “with deferred product revenue up 15 percent, driven largely by subscription-based and software offers, and deferred service revenue was up 1 percent.”
The portion of deferred product revenue that is related to recurring software and subscription offers increased 23 percent over 2017, Cisco stated. In addition, Cisco reported deferred revenue from software and subscriptions increasing 23 percent to $6.1 billion in the fourth quarter alone.
Domain Name System (DNS) is our root of trust and is one of the most critical components of the internet. It is a mission-critical service because if it goes down, a business’s web presence goes down.
DNS is a virtual database of names and numbers. It serves as the backbone for other services critical to organizations, including email, internet site access, voice over internet protocol (VoIP), and file management.
You hope that when you type a domain name, you really go where you are supposed to go. DNS vulnerabilities get little attention until an actual attack occurs and makes the news. For example, in April 2018, the public DNS servers that managed the domain for Myetherwallet were hijacked and customers were redirected to a phishing site. Many users reported losing funds from their accounts, and the incident brought a great deal of public attention to DNS vulnerabilities.
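Hijacks like that one work because clients simply trust whatever answer the resolver returns. A minimal sketch of the lookup step, using Python's standard library (which asks the operating system's resolver, which in turn queries DNS for non-local names):

```python
import socket

def resolve(name):
    """Return the sorted set of IPv4 addresses the system resolver
    reports for a hostname."""
    infos = socket.getaddrinfo(name, None, family=socket.AF_INET)
    return sorted({info[4][0] for info in infos})
```

Nothing in this path verifies that the answer is authentic; that is exactly the gap that DNS hijacking exploits and that DNSSEC is designed to close.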
Fiber transmission could be more efficient, go farther, carry more traffic and be cheaper to implement if the work of scientists in Sweden and Estonia is successful.
In a recent demonstration, researchers at Chalmers University of Technology, Sweden, and Tallinn University of Technology, Estonia, used new, ultra-low-noise amplifiers to increase the normal fiber-optic transmission link range six-fold.
And in a separate experiment, researchers at DTU Fotonik, the Technical University of Denmark, used a unique frequency comb to push more data than the total of all internet traffic down one solitary fiber link.
Fiber transmission limits
Signal noise and distortion have always been behind the limits of traditional (and fairly inefficient) fiber transmission. They are the main reasons the technology's transmission distance and capacity are restricted. Experts believe, however, that if the noise in the amplifiers used to gain distance could be cleaned up, and the signal distortion inherent in the fiber itself could be eliminated, fiber could become more efficient and less costly to implement.
Dynamic Host Configuration Protocol (DHCP) is the standard way network administrators assign IP addresses in IPv4 networks, but eventually organizations will have to pick between two protocols created specifically for IPv6 as the use of this newer IP protocol grows.
DHCP, which dates back to 1993, is an automated way to assign IPv4 addresses, but when IPv6 was designed, it was provided with an auto-configuration feature dubbed SLAAC that could eventually make DHCP irrelevant. To complicate matters, a new DHCP – DHCPv6 – that performs the same function as SLAAC was independently created for IPv6.
Deciding between SLAAC and DHCPv6 isn’t something admins will have to do anytime soon, since the uptake of IPv6 has been slow, but it is on the horizon.
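The core of SLAAC's auto-configuration is a concrete, well-defined computation: the host derives a 64-bit interface identifier from its own MAC address (the classic modified EUI-64 method) and appends it to an advertised prefix, with no server involved. A sketch of that derivation (note that modern systems often substitute privacy or stable-opaque identifiers for EUI-64):

```python
def eui64_interface_id(mac):
    """Derive the modified EUI-64 interface identifier from a 48-bit MAC:
    split the MAC in half, insert ff:fe, and flip the universal/local bit
    (bit 0x02) of the first byte."""
    octets = [int(b, 16) for b in mac.split(":")]
    octets[0] ^= 0x02                      # flip the U/L bit
    eui = octets[:3] + [0xFF, 0xFE] + octets[3:]
    # Group into four 16-bit hextets, dropping leading zeros as usual
    return ":".join(f"{eui[i] << 8 | eui[i + 1]:x}" for i in range(0, 8, 2))

def slaac_link_local(mac):
    # Combine the fe80::/64 link-local prefix with the interface ID.
    return "fe80::" + eui64_interface_id(mac)
```

For example, MAC `00:11:22:33:44:55` yields the link-local address `fe80::211:22ff:fe33:4455`. DHCPv6, by contrast, hands out the whole address from a server-managed pool, which is why the two approaches differ so much operationally.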
More than 120 million Microsoft Office accounts have moved from on-premises to the cloud since the launch of Microsoft Office 365. Many of those accounts belong to users in large enterprises that weren't fully prepared for the transition. The fact is that as many as 30 to 40 percent of enterprises struggle with some degree of application performance trouble as they make the shift to the cloud.
Some of the signs of poor performance (and the source of users’ frustration) include Outlook responding slowly when the user tries to open messages, VoIP calls over Skype for Business having rough spots, and documents being slow to open, close and save in Word. Performance problems in the Office applications manifest in many other ways, as well.
The financial services industry is experiencing a period of dramatic change as a result of the growth in digitalization and its effect on customer behavior. In an emerging landscape made up of cryptocurrencies, frictionless trading, and consolidated marketplace lending, traditional banks have found themselves shaken by the introduction of new, disruptive, digitally-native and mobile-first brands.
With a reputation for being somewhat conservative and slow to innovate, many financial service providers are now modernizing and improving their systems, transforming their business models and technologies in an effort to stay ahead of the more agile challengers snapping at their heels.
After nearly four years of slashing at each other in court with legal swords, Cisco and Arista have agreed to disagree, mostly.
To settle the litigation mêlée, Arista has agreed to pay Cisco $400 million, which will result in the dismissal of all pending district court and International Trade Commission litigation between the two companies.
For Arista, the agreement should finally end any customer fear, uncertainty and doubt caused by the lawsuit. In fact, Zacks Equity Research wrote that the settlement is likely to benefit Arista immensely.
Network packet brokers (NPBs) have played a key role in helping organizations manage their monitoring and security tools. The tool space has exploded, and there is a tool for almost everything. Cybersecurity, probes, network performance management, forensics, application performance, and other tools have become highly specialized, leading companies to experience something called "tool sprawl," where connecting a large number of tools into the infrastructure creates a big, complex mesh of connections.
Ideally, every tool would receive information from every network device, enabling it to have a complete view of what’s happening, who is accessing what, where they are coming in from, and when events occurred.
Cisco is moving rapidly toward its ultimate goal of making SD-WAN features ubiquitous across its communication products, promising to boost network performance and reliability of distributed branches and cloud services.
The company this week took a giant step in that direction by adding Viptela SD-WAN technology to the IOS XE software that runs its core ISR/ASR routers. More than a million ISR/ASR edge routers, such as the ISR 1000 and 4000 and the ASR 5000 models, are in use by organizations worldwide.
The release of Cisco IOS XE provides an instant upgrade path for creating cloud-controlled SD-WAN fabrics to connect distributed offices, people, devices and applications operating on the installed base, wrote Anand Oswal, senior vice president of network engineering, in a blog post about the upgrade.
For years it has been normal practice for organizations to store as much data as they can. More economical storage options combined with the hype around big data encouraged data hoarding, with the idea that value would be extracted at some point in the future.
With advances in data analysis many companies are now successfully mining their data for useful business insights, but the sheer volume of data being produced and the need to prepare it for analysis are prime reasons to reconsider your strategy. To balance cost and value it’s important to look beyond data hoarding and to find ways of processing and reducing the data you’re collecting.
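One simple way to "process and reduce" data at collection time, rather than hoarding every raw sample, is to replace each batch of readings with a compact summary. A minimal sketch (the bucket size is a tuning choice, and `(min, mean, max)` is just one plausible summary):

```python
from statistics import mean

def reduce_batch(readings, bucket_size):
    """Replace each bucket of raw samples with a (min, mean, max) summary,
    shrinking storage while keeping the features many analyses need."""
    summaries = []
    for i in range(0, len(readings), bucket_size):
        bucket = readings[i:i + bucket_size]
        summaries.append((min(bucket), mean(bucket), max(bucket)))
    return summaries
```

The trade-off is explicit: you give up the ability to re-derive anything the summary doesn't capture, so the reduction scheme should be chosen with the intended analyses in mind.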
If you’re reading this, you’ve got RF power. Power is a necessity for networking, allowing us to charge our batteries, connect millions of devices, communicate over long distances and keep our signals clear.
Don’t believe me? Kill the power and see what happens to your network.
But with great RF power comes great responsibility. Power management is the art and science of optimizing input and output signals to maximize the efficiency and performance of RF devices, and it's no easy feat. Each networking device has its own unique power requirements. Higher data rates often mean more power consumption and complexity, which can introduce losses that reduce reliability and increase cost. Low-data-rate devices, such as those supporting the Internet of Things (IoT), draw very little power in order to conserve every bit of precious battery life.
It’s in our phones, TVs, toasters, cars, watches, toothbrushes – even in the soles of our shoes.
The internet is everywhere. Right?
Well, no. About 47 percent of the global population of 7.6 billion people doesn’t have internet access, as tough as that is for those of us in internet-rich locales to imagine. But companies are working on ways to bridge this digital divide, and systems based on low-earth-orbit (LEO) satellites are becoming a big part of the conversation.
The benefits of satellite internet are obvious in places where land-based network infrastructure doesn’t exist. But while systems based on high-orbit satellites need only minimal ground equipment to reach remote places, a range of complications – including cost, speed and performance – prevent them from being a global solution. LEO systems aim to get past the problems by getting closer to earth.
When most people encounter headlines about high-profile cloud outages, they think about the cloud vendor's name, or how the negative publicity might affect stock prices. I think about the people behind the scenes—the ones tasked with fixing the problem and getting customer systems back up and running.
Despite their best efforts, the occasional outage is inevitable. The internet is a volatile place, and nobody is completely immune to this danger. Fortunately, there are some straightforward steps businesses can take to guard against the possibility of unplanned downtime.
Here are four ways to avoid cloud outages while improving security and performance in the process:
When I stepped into the field of networking, everything was static and security was based on perimeter-level firewalling. It was common to have two perimeter-based firewalls: one internal and one external to the wide area network (WAN). Such a layout was good enough in those days.
I remember the time when connected devices were corporate-owned. Everything was hard-wired and I used to define the access control policies on a port-by-port and VLAN-by-VLAN basis. There were numerous manual end-to-end policy configurations, which were not only time consuming but also error-prone.
There was a complete lack of visibility and global policy throughout the network, and every morning I relied on the Multi Router Traffic Grapher (MRTG) to manually inspect traffic spikes indicating variations from baselines. Once something was plugged in, it was "there for life." Have you ever heard of the 20-year-old PC that no one can locate but that still replies to ping? In contrast, we now live in an entirely different world. The perimeter has dissolved, making perimeter-level firewalling alone insufficient.
As distributed resources from wired, wireless, cloud and Internet of Things networks grow, the need for a more intelligent network edge is growing with it.
Network World’s 8th annual State of the Network survey shows the growing importance of edge networking, finding that 56% of respondents have plans for edge computing in their organizations.
Typically, edge networking entails sending data to a local device that includes compute, storage and network connectivity in a small form factor. Data is processed at the edge, and all or a portion of it is sent to the central processing or storage repository in a corporate data center or infrastructure-as-a-service (IaaS) cloud.
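The filter-and-forward behavior described above can be sketched in a few lines. This is an illustrative shape only; the summary fields and threshold rule are invented for the example:

```python
def edge_filter(samples, threshold):
    """Process data locally at the edge and forward only what the core
    needs: a compact summary plus any readings above a threshold."""
    summary = {"count": len(samples), "max": max(samples)}
    anomalies = [s for s in samples if s > threshold]
    return summary, anomalies
```

Instead of shipping every raw sample to the data center or IaaS cloud, the edge node sends a small summary and the handful of readings that actually warrant central attention, which is precisely the bandwidth and latency win that motivates edge computing.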
Cisco today laid out $2.35 billion in cash and stock for network identity, authentication and security company Duo.
According to Cisco, Duo helps protect organizations against cyber breaches through its cloud-based software, which verifies the identity of users and the health of their devices before granting access to applications, with the aim of preventing breaches and account takeovers.
A few particulars of the deal include: