Friday 7 November 2014

Have an iPad or iPhone? Don't plug it into public chargers


A new trojan horse that specifically targets iOS and OS X users has emerged from China.


Security firm Palo Alto Networks has warned of a new malware threat to Apple customers that can infect iPhones and iPads via USB.
WireLurker is significant as it is the first to be able to infect iOS devices like a traditional virus. Previous malware has required users to have jailbroken their device, meaning less than one per cent of users were at risk.
The point of origin appears to have been a third-party app store for Macs running OS X.

 

Self-replicating malware


WireLurker is able to generate infected software automatically, meaning that removing the app that allowed the malware onto the device isn’t enough to solve the issue.
At the time of writing, some 467 apps are believed to be infected, with more than 350,000 users downloading them. All are hosted on Chinese third-party app store Maiyadi.

 

Infection through USB


Those downloading apps via Maiyadi aren’t the only users at risk. WireLurker is also able to gain access to iOS devices when they are connected via USB to an infected Mac.
Once it has infected an iOS device, WireLurker copies your phonebook and will read through any iMessages you have on your phone or tablet.
For the time being, Palo Alto is saying the best way to avoid infection is not to connect your iOS devices to any unfamiliar devices, and to avoid using non-Apple chargers. That includes the public chargers you get at airports.
“We are aware of malicious software available from a download site aimed at users in China, and we’ve blocked the identified apps to prevent them from launching,” a spokesperson for Apple told Engadget. “As always, we recommend that users download and install software from trusted sources.”

Source: http://www.t3.com

D-Link connects everything to everything else


Nervous homeowners now have a new way to reduce their urban paranoia levels. The Mydlink Home system lets you monitor your mansion AND switch things on and off in it.

Hot on the heels of the Withings Home, here's a slightly dowdier but no less high-tech Home: the Mydlink Home from D-Link.

The system comprises a brace of security cameras - the tilt-and-pannable Home Monitor 360 and the immobile Home Monitor HD - plus the self-explanatory Wi-Fi Motion Sensor, the Home Smart Plug and the Mydlink app, which receives alerts from the Home gear and lets you monitor and activate/deactivate electricals in your home.

Examples suggested by D-Link range from the sensible - a lamp that turns on via the Smart Plug when motion is detected by your Home Monitor camera - to the slightly mad - you can tune in to a Home Monitor camera to see if the iron is still on, and then deactivate it. Well, so long as you've put every square inch of your pad under Mydlink surveillance, and plugged your iron into a Smart Plug socket.

These products follow hot on the heels of the same sub-brand's Home Music Everywhere streamer. It means D-Link devices can now theoretically turn on just about everything in your house, though not, by and large, control things beyond that - that's where IFTTT will eventually come into its own, with kit springing into life when it detects it's been turned on.    

It's got to be said, these are notably less attractive than Withings' devices. Still, the proof is in the using with these things, really, and rest assured we'll have a review as soon as D-Link comes up with the goods. 

Pricing is as follows: Home Monitor HD £91, Home Monitor 360 £95, Home Smart Plug £41, Home Wi-Fi Motion Sensor £36.50, Home Music Everywhere £45.50

Source: http://www.t3.com 

The best new features in Windows Server 10 (so far)


The early look at Windows Server 10 reveals many expected enhancements and nice surprises

Alongside Microsoft's Oct. 1 release of the Windows 10 Technical Preview, the company offered early previews of the next iteration of Windows Server and System Center. With final releases not expected until the summer of 2015, these extremely early technical previews are a marked departure from the norm for Microsoft. Far from being feature complete or even stable, the Windows Server Technical Preview nevertheless presents a way to become familiar with new features coming down the pike, and to put the UI changes through their paces.
As you'd expect, Windows Server Technical Preview largely builds on virtualization, storage, networking, and management capabilities introduced with Windows Server 2012. But it also holds a few nice surprises. Here is a quick tour of the highlights -- for now. We're sure to see much more in the coming months.

 

Start menu and the UI


Debate over the switch from the Windows 7 Start menu to the Start screen in Windows 8 has been nonstop since day one, but if the Start screen proved to be a bad fit for laptops and workstations, it makes even less sense for servers. Fortunately the new Start menu isn’t limited to the Windows 10 client, but is also present in the Windows Server Technical Preview. While server users won’t benefit much from Windows 8-style live tiles, the new Start menu (accessed by clicking the Windows button) is unobtrusive and familiar.
The other big changes in the UI are focused on multitasking. First is support for virtual desktops (not to be confused with remote desktops), which can be used to group like applications into separate desktop instances. The ability to snap windows to the edges of the screen is also enhanced in the technical preview. Instead of simply splitting the screen in half like in Windows 7 and Windows 8, you can split the screen into quarters. This feature is clearly more beneficial to desktop users (hopefully most of your server management isn’t done from the console), but anything that makes an admin’s workflow smoother and more efficient is welcome.

Like the old Windows 7 Start menu, the new Start menu in Windows Server Technical Preview offers fast access to all apps and files.

The command line and PowerShell


Thanks to PowerShell, more and more admins are driving their Windows servers from the command line. Microsoft is improving the experience there too. In current versions of Windows, selecting text or doing a simple copy and paste into the Windows command line is not only a pain, but can introduce line breaks, tabs, and inconsistent or unexpected characters. These inconsistencies are gone in the Windows Server Technical Preview. Now when you paste incompatible special characters such as slanted quotes into the command line, they are automatically cleaned up and converted into their command-line-safe equivalents.
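The conversion is conceptually simple. Here is a minimal Python sketch of that kind of character cleanup; the mapping is illustrative, not Microsoft's actual conversion table:

```python
# A minimal sketch (not Microsoft's actual conversion table) of the
# kind of cleanup described above: map typographic "slanted" quotes
# to their command-line-safe ASCII equivalents.
SMART_TO_SAFE = {
    "\u2018": "'",   # left single quotation mark
    "\u2019": "'",   # right single quotation mark
    "\u201c": '"',   # left double quotation mark
    "\u201d": '"',   # right double quotation mark
}

def make_paste_safe(text: str) -> str:
    """Replace typographic characters with plain ASCII equivalents."""
    return "".join(SMART_TO_SAFE.get(ch, ch) for ch in text)

print(make_paste_safe("\u201cping \u2018localhost\u2019\u201d"))
# -> "ping 'localhost'"
```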
Microsoft is aware that PowerShell is a major selling point of the Windows Server platform right now and is taking measures to ensure the whole experience is optimized and pain free. The Windows Server Technical Preview includes PowerShell 5, which is a significant release offering critical new features, as well as updates to features that have been around for a while. The biggest new feature in PowerShell 5 is OneGet, which brings package management capabilities to Windows.
Another major new area of improvement is the ability to manage network switches from within PowerShell, a nod to Microsoft’s efforts to leverage automation throughout the data center. Other PowerShell enhancements include updates to Desired State Configuration and the ability to natively manage zip archive files.

 

Windows Defender


Windows Defender, Microsoft’s free antimalware solution, was originally licensed only for home use, then integrated into the OS with Windows 8. The Windows Server Technical Preview includes Windows Defender natively, though the UI element is optional. Many corporate customers will likely prefer an enterprise antimalware solution, but there are clear benefits to having Windows Defender enabled natively. Having antimalware protection from the get-go is a big deal, and the ability to manage it through PowerShell is another notable win for system administrators.

In Windows 10 and Windows Server Technical Preview, you can enable Command Prompt property settings that make life much easier on the command line.

 

Hyper-V


Without a doubt, one of Microsoft’s most rapidly evolving platforms, Hyper-V continues to receive major attention in the Windows Server Technical Preview. The first new feature is the ability to perform a rolling upgrade to a Windows Server 2012 R2 Hyper-V cluster, upgrading cluster nodes to the Windows Server Technical Preview one by one. Once all nodes have been updated, the functional level of the entire cluster can then be upgraded to support a number of new Hyper-V features.
For starters, virtual machines running on Windows Server Technical Preview use a new configuration file format. The new format promises to be both more efficient (when reading and writing the data) and safer, preventing data corruption due to storage failure. Checkpoints for point-in-time snapshots are now supported in production workloads, due to the use of backup technology within the guest OS. Windows-based virtual machines will use the Volume Snapshot Service, while Linux VMs flush their file system buffers during checkpoint creation.
Hyper-V Manager receives some love in the Windows Server Technical Preview, gaining support for WS-MAN and the ability to use a different set of credentials to connect to a remote host. Additionally, virtual network adapters and memory are now treated as hot-swap capable, so it's easier to perform critical VM changes on the fly. Finally, virtual machines hosted in the Windows Server Technical Preview now support Connected Standby.

 

Storage enhancements


Windows Server 2012 introduced Storage Spaces, a method of pooling physical storage devices (hard drives or SSDs) into logical volumes in order to boost performance and reliability. Windows Server 2012 R2 added automated tiering, with pools of SSDs being used for the most frequently accessed data and spinning hard drives for less frequently used data. 
Two major features added in the Windows Server Technical Preview are aimed at common use cases for Windows Server-based storage. The first, Storage QoS (Quality of Service), leverages PowerShell and WMI (Windows Management Instrumentation) to build policies managing prioritization and performance of virtual hard disks. The second, Storage Replica, brings block-level replication to Windows Server. Storage Replica provides high availability and can even be used to build multisite, fail-over clusters. Between Storage QoS and Storage Replica, the Windows Server Technical Preview shows Microsoft is serious about making Windows Server a viable option for all of your storage needs.

 

Virtual networking


Windows Server 2012 introduced several new capabilities for building complex virtual networks and allowing clients to connect to their own isolated virtual network through the use of multitenant site-to-site VPN. This was pitched as a way for service providers to build their own cloud service on the Windows Server platform, but the configuration was complex and primarily handled within PowerShell. The Windows Server Technical Preview brings this functionality into a new server role called the Network Controller. The Network Controller role provides the ability to automate the configuration of networks both physical and virtual, as well as handle numerous other aspects of your networking environment.

The Network Controller server role in Windows Server Technical Preview incorporates virtual networks, physical networks, and network services.

Identity and access management


Possibly one of the more exciting features coming to the next version of Windows Server is more control over the permissions provided to users with elevated rights. Microsoft has not said much publicly about the additional level of security, only that time-based access and more fine-grained permissions will be available. However, one could speculate that this will be based on PowerShell’s JEA (Just Enough Admin) feature set. JEA allows administrator access to be limited to specific PowerShell cmdlets, specific modules, or even certain parameters within a cmdlet.
Additionally, JEA is configured using a local administrator on the server, preventing network-level permissions from being cached on the server and potentially being used in a pass-the-hash attack. Regardless of how these features look and feel in the final product, they will be a welcome addition for IT shops.

 

MultiPoint Services


In conjunction with Remote Desktop Services, MultiPoint Services support multiple users logging into the same computer. Rather than requiring a thin client or additional hardware, MultiPoint Service clients are connected directly to the server using standard USB and video devices. This functionality was originally shipped as Windows MultiPoint Server 2012, a product aimed at schools that allows a teacher to manage what is shown on student displays. Now it comes along for the ride in Windows Server Technical Preview.

 

DNS Policies


An announced feature that is nowhere to be found in the current release of the technical preview, DNS Policies will presumably allow you to manage how and when your DNS server responds to client queries. Microsoft states that DNS responses can be configured based on time, the public IP of the DNS client performing the query, and other parameters. There are several scenarios in which this type of functionality could be useful, such as load balancing or custom responses based on geography. I imagine this having a similar feel to the policy-based DHCP functionality introduced in Windows Server 2012.

 

IP Address Management


IPAM (IP Address Management) was introduced in Windows Server 2012 as a way to monitor and manage DHCP and DNS services. The focus in both Windows Server 2012 and Windows Server 2012 R2 was clearly on DHCP and the IP address space. The Windows Server Technical Preview enhances existing functionality for DNS servers and your IP address space, but also allows you to manage DNS zones and resource records on both Active Directory-integrated and file-backed DNS servers.

 

Web Application Proxy


First appearing as a core Windows service in Windows Server 2012 R2, Web Application Proxy functions as a reverse proxy, allowing external clients to access Web applications internal to the corporate network. The Windows Server Technical Preview promises new capabilities in Web Application Proxy, including the ability to handle HTTP-to-HTTPS redirection and additional support for claims-based or integrated Windows authentication.

 

Windows Server next


Where is Windows Server Technical Preview taking us? Microsoft pitched Windows Server 2012 and Windows Server 2012 R2 as the basis for our private cloud. Major features introduced or substantially enhanced in Windows Server 2012 -- such as Hyper-V, Storage Spaces, IP Address Management, and multitenant site-to-site VPN -- were geared specifically to companies looking to gain efficiency through consolidation and automation. 
The Windows Server Technical Preview is a clear progression of this vision, as most of the features enumerated here bring something new to the table when it comes to building and managing a hybrid or private cloud.

Source: http://www.infoworld.com

7 free tools every network needs

group of construction tools blue toned copy space 000008809249

From device discovery to visibility into systems, networks, and traffic flows, these free open source monitoring tools have you covered

In the real estate world, the mantra is location, location, location. In the network and server administration world, the mantra is visibility, visibility, visibility. If you don't know what your network and servers are doing at every second of the day, you're flying blind. Sooner or later, you're going to meet with disaster.
Fortunately, many good tools, both commercial and open source, are available to shine much-needed light into your environment. Because good and free always beat good and costly, I've compiled a list of my favorite open source tools that prove their worth day in and day out in networks of any size. From network and server monitoring to trending, graphing, and even switch and router configuration backups, these utilities will see you through.

 

1) Cacti


First, there was MRTG. Back in the heady 1990s, Tobi Oetiker saw fit to write a simple graphing tool built on a round-robin database scheme that was perfectly suited to displaying router throughput. MRTG begat RRDTool, which is the self-contained round-robin database and graphing solution in use in a staggering number of open source tools today. Cacti is the current standard-bearer of open source network graphing, and it takes the original goals of MRTG to whole new levels.
Cacti is a LAMP application that provides a complete graphing framework for data of nearly every sort. In some of my more advanced installations of Cacti, I collect data on everything from fluid return temperatures in data center cooling units to free space on filer volumes to FLEXlm license utilization. If a device or service returns numeric data, it can probably be integrated into Cacti. There are templates to monitor a wide variety of devices, from Linux and Windows servers to Cisco routers and switches -- basically anything that speaks SNMP. There are also collections of contributed templates for an even greater array of hardware and software.
While Cacti's default collection method is SNMP, local Perl or PHP scripts can be used as well. The framework deftly separates data collection and graphing into discrete instances, so it's easy to rework and reorganize existing data into different displays. In addition, you can easily select specific timeframes and sections of graphs simply by clicking and dragging. In some of my installations, I have data going back several years, which proves invaluable when determining if current behavior of a network device or server is truly anomalous or, in fact, occurs regularly.
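As an illustration of how small such a collector can be, here is a minimal Python sketch of a custom data input script of the sort Cacti can poll. The field name and the sensor file it reads are hypothetical stand-ins for a real probe:

```python
#!/usr/bin/env python
# Minimal sketch of a custom Cacti data input script. Cacti expects
# the script to print "field:value" pairs on one line; the data
# source polled here (a text file) is purely hypothetical.

def read_return_temp() -> float:
    # Stand-in for a real probe query (an SNMP call, vendor API, etc.).
    with open("/var/lib/sensors/cooling_unit1.txt") as f:
        return float(f.read().strip())

def main():
    temp = read_return_temp()
    # Cacti parses this line and stores each field in its RRD.
    print(f"return_temp:{temp:.1f}")

if __name__ == "__main__":
    main()
```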

From disk utilization to fan speeds in a power supply, if it can be monitored, Cacti can track it -- and make that data quickly available.

Using the PHP Network Weathermap plug-in for Cacti, you can easily create live network maps showing link utilization between network devices, complete with graphs that appear when you hover over a depiction of a network link. In many places where I've implemented Cacti, these maps wind up running 24/7 on 42-inch LCD monitors mounted high on the wall, providing the IT staff with at-a-glance updates on network utilization and link status.
Cacti is an extensive performance graphing and trending tool that can be used to track nearly any monitored metric that can be plotted on a graph. It's also infinitely customizable, which means it can get complex in places.

 

2) Nagios


Nagios is a mature network monitoring framework that's been in active development for many years. Written in C, it's almost everything that system and network administrators could ask for in a monitoring package. The Web GUI is fast and intuitive, and the back end is extremely robust.
As with Cacti, a very active community supports Nagios, and plug-ins exist for a massive array of hardware and software. From basic ping tests to integration with plug-ins like WebInject, you can constantly monitor the status of servers, services, network links, and basically anything that speaks IP. I use Nagios to monitor server disk space, RAM and CPU utilization, FLEXlm license utilization, server exhaust temperatures, and WAN and Internet link latency. It can be used to ensure that Web servers are not only answering HTTP queries, but that they're returning the expected pages and haven't been hijacked, for example.
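To give a flavor of how little a custom check requires, here is a minimal Python sketch of a Nagios-style plug-in. Nagios cares only about the exit code (0 = OK, 1 = warning, 2 = critical, 3 = unknown) and the first line of output; the disk-space thresholds below are arbitrary examples:

```python
#!/usr/bin/env python
# Minimal sketch of a custom Nagios check plug-in. Nagios only cares
# about the exit code (0=OK, 1=WARNING, 2=CRITICAL, 3=UNKNOWN) and the
# first line of output; the thresholds here are arbitrary examples.
import shutil
import sys

WARN_PCT, CRIT_PCT = 80, 90
PATH = "/"

def main():
    usage = shutil.disk_usage(PATH)
    used_pct = 100 * usage.used / usage.total
    status = f"disk {PATH} at {used_pct:.1f}% used"
    if used_pct >= CRIT_PCT:
        print(f"CRITICAL - {status}")
        sys.exit(2)
    if used_pct >= WARN_PCT:
        print(f"WARNING - {status}")
        sys.exit(1)
    print(f"OK - {status}")
    sys.exit(0)

if __name__ == "__main__":
    main()
```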
Network and server monitoring is obviously incomplete without notifications. Nagios has a full email/SMS notification engine and an escalation layout that can be used to make intelligent decisions about whom to notify and when, which can save plenty of sleep if used correctly. In addition, I’ve integrated Nagios notifications with Jabber, so the instant an exception is thrown, I get an IM from Nagios detailing the problem in addition to an SMS or email, depending on the escalation settings for that object. The Web GUI can be used to quickly suspend notifications or acknowledge problems when they occur, and it can even record notes entered by admins.

Nagios can be a challenge for newcomers, but the rather complex configuration is also its strength, as it can be adapted to almost any monitoring task.

As if this wasn't enough, a mapping function displays all the monitored devices in a logical representation of their placement on the network, with color-coding to show problems as they occur.
The downside to Nagios is the configuration. The config is best done via command line and can present a significant learning curve for newbies, though folks who are comfortable with standard Linux/Unix config files will feel right at home. As with many tools, the capabilities of Nagios are immense, but the effort to take advantage of some of those capabilities is equally significant.
Don't let the complexity discourage you -- Nagios has saved my bacon more times than I can possibly recall. The benefits of the early-warning systems provided by this tool for so many different aspects of the network cannot be overstated. It's easily worth your time and effort.

 

3) Icinga


Icinga started out as a fork of Nagios, but has recently been rewritten as Icinga 2. Both versions are under active development and available today, and Icinga 1.x is backward-compatible with Nagios plug-ins and configurations. Icinga 2 has been developed to be smaller and sleeker, and it offers distributed monitoring and multithreading frameworks that aren’t present in Nagios or Icinga 1. You can migrate from Nagios to Icinga 1 and from Icinga 1 to Icinga 2.
Like Nagios, Icinga can be used to monitor anything that speaks IP, as deep as you can go with SNMP and custom plug-ins and add-ons.

Icinga offers a thorough monitoring and alerting framework that’s designed to be as open and extensible as Nagios is, but with several different Web UI options.


There are several Web UIs for Icinga, and one major differentiator from Nagios is the configuration, which can be done via the Web UI rather than through configuration files. For those who'd rather manage their configurations outside of the command line, this is a significant benefit.
Icinga integrates with a variety of graphing and monitoring packages such as PNP4Nagios, inGraph, and Graphite, providing solid performance visualizations. Icinga also has extended reporting capabilities.

 

4) NeDi


If you've ever had to search for a device on your network by telnetting into switches and doing MAC address lookups, or you simply wish you could tell where a certain device is physically located (or, perhaps more important, where it was located), then you should take a good look at NeDi.
NeDi is a LAMP application that regularly walks the MAC address and ARP tables on your network switches, cataloging every device it discovers in a local database. It’s not as well-known as some other projects, but it can be a very handy tool in corporate networks where devices are moving around constantly.
You can log into the NeDi Web GUI and conduct searches to determine the switch, switch port, or wireless AP of any device by MAC address, IP address, or DNS name. NeDi collects as much information as possible from every network device it encounters, pulling serial numbers, firmware and software versions, current temps, module configurations, and so forth. You can even use NeDi to flag MAC addresses of devices that are missing or stolen. If they appear on the network again, NeDi will let you know.
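Under the hood, this kind of cataloging boils down to SNMP walks. As a rough sketch of the idea (not NeDi's own code), the following Python snippet uses pysnmp to read a switch's BRIDGE-MIB forwarding table; the switch address and community string are placeholders:

```python
#!/usr/bin/env python
# Rough sketch of the kind of MAC-table walk NeDi automates, using
# pysnmp to read a switch's BRIDGE-MIB forwarding table. The switch
# address and community string below are placeholders.
from pysnmp.hlapi import (SnmpEngine, CommunityData, UdpTransportTarget,
                          ContextData, ObjectType, ObjectIdentity, nextCmd)

# dot1dTpFdbPort: the bridge port on which each learned MAC was seen.
DOT1D_TP_FDB_PORT = '1.3.6.1.2.1.17.4.3.1.2'

for (error_indication, error_status, _, var_binds) in nextCmd(
        SnmpEngine(),
        CommunityData('public'),                 # placeholder community
        UdpTransportTarget(('192.0.2.1', 161)),  # placeholder switch
        ContextData(),
        ObjectType(ObjectIdentity(DOT1D_TP_FDB_PORT)),
        lexicographicMode=False):                # stop at end of table
    if error_indication or error_status:
        print(error_indication or error_status.prettyPrint())
        break
    for var_bind in var_binds:
        # The last six sub-identifiers of each OID encode the learned
        # MAC address; the value is the bridge port number.
        print(' = '.join(x.prettyPrint() for x in var_bind))
```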

NeDi continuously walks through a network infrastructure and catalogs devices, keeping track of everything it discovers.

Discovery runs from cron at set intervals. Configuration is straightforward, with a single config file that allows for a significant amount of customization, including the ability to skip devices based on regular expressions or network-border definitions. You can even include seed lists of devices to query if the network is separated by undiscoverable boundaries, as in the case of an MPLS network. NeDi usually uses Cisco Discovery Protocol or Link Layer Discovery Protocol, discovering new switches and routers as it rolls through the network, then connecting to them to collect their information. Once the initial configuration has been set, running a discovery is fairly quick.
NeDi integrates with Cacti to some degree, and if provided with the credentials to a functional Cacti installation, device discoveries will link to the associated Cacti graphs for that device.

 

5) Ntop


The Ntop project -- now known as Ntopng, for "next generation" -- has come a long way over the past decade. Call it Ntop or Ntopng, what you get is a top-notch network traffic monitor married to a fast and simple Web GUI. It's written in C and completely self-contained. You run a single process configured to watch a specific network interface, and that's about all there is to it.
Ntop provides easily digestible graphs and tables showing current and past network traffic, including protocol, source, destination, and history of specific transactions, as well as the hosts on either end. You'll also find an impressive array of network utilization graphs, live maps, and trends, along with a plug-in framework for an array of add-ons such as NetFlow and sFlow monitors. There’s even the Nbox, a hardware monitor that embeds Ntop.
Ntop even incorporates a lightweight Lua API framework that can be used to support extensions via scripting languages. Ntop can also store host data in RRD files for persistent data collection.

Ntop is a packet sniffing tool with a slick Web UI that displays live data on network traffic. Host data flow and host communication pair information is also available in real time.

One of the handiest uses of Ntopng is on-the-spot traffic checkups. When one of my Cacti-driven PHP Weathermaps suddenly shows a collection of network links running in the red, I know that those links exceed 85 percent utilization, but I don't know why. By switching to an Ntopng process watching that network segment, I can pull a minute-by-minute report of the top talkers and immediately know which hosts are responsible and what traffic they're pushing.
That kind of visibility is invaluable, and it's very easy to come by. Essentially, you can run Ntopng on any interface that's been configured at the switch level to monitor another port or VLAN. That's it.
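For a sense of what "top talkers" means at the packet level, here is a rough Python sketch using scapy rather than Ntopng itself; the interface name and the one-minute capture window are examples, and sniffing requires appropriate privileges:

```python
#!/usr/bin/env python
# Rough sketch of the "top talkers" idea: count bytes per source IP
# on a mirrored port for one minute, then report the heaviest senders.
# The interface name and capture window are examples.
from collections import Counter
from scapy.all import sniff, IP

bytes_by_src = Counter()

def tally(pkt):
    if IP in pkt:
        bytes_by_src[pkt[IP].src] += len(pkt)

# Capture one minute of traffic, then print the top ten senders.
sniff(iface="eth0", prn=tally, store=False, timeout=60)
for src, nbytes in bytes_by_src.most_common(10):
    print(f"{src:15s} {nbytes} bytes")
```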

 

6) Zabbix


Zabbix is a full-scale network- and system-monitoring tool that combines several functions into a single Web-based console. It can be configured to monitor and collect data from a wide variety of servers and network gear, offering service and performance monitoring of each object.
Zabbix works with agents running on monitored systems, though it can also run agentless using SNMP or other monitoring methods such as remote checks on open services like SMTP and HTTP. It explicitly supports VMware and other virtualization hypervisors, producing in-depth data on hypervisor performance and activity. Special attention is also paid to monitoring Java application servers, Web services, and databases.
Hosts can be added manually or through an autodiscovery process. An extensive set of default templates applies to the most common use cases: Linux, FreeBSD, and Windows servers; well-known services such as SMTP and HTTP; and ICMP and IPMI devices for in-depth hardware monitoring. In addition, custom checks written in Perl, Python, or nearly any language can be integrated into Zabbix.
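To illustrate how low the bar for a custom check is, here is a minimal Python sketch; the queue directory it watches is hypothetical:

```python
#!/usr/bin/env python
# Minimal sketch of a custom Zabbix check. The agent's UserParameter
# mechanism runs a script and treats whatever it prints as the item
# value; the application queue directory here is hypothetical.
import os

SPOOL_DIR = "/var/spool/myapp/queue"   # hypothetical application queue

def main():
    # Zabbix expects a single value on stdout -- here, the queue depth.
    print(len(os.listdir(SPOOL_DIR)))

if __name__ == "__main__":
    main()
```

On the agent side, a single configuration line such as UserParameter=myapp.queue_depth,/usr/local/bin/queue_depth.py (the key name is an example) exposes the script's output as an item Zabbix can graph and alert on.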

Zabbix monitors servers and networks with an extensive array of tools, including tools for monitoring virtualization hypervisors and Web application stacks.

Zabbix also offers customizable dashboards and Web UI displays to focus attention on your most critical components. Notifications and escalations can draw on customizable actions that can be applied to hosts or groups of hosts. Actions can even be configured to trigger remote commands, so a script can be run on a monitored host if certain event criteria are observed.
Zabbix graphs performance data such as network throughput and CPU utilization, and collects it in customizable displays. Further, Zabbix supports customizable maps, screens, and even slideshows that display the current status of monitored devices.
Zabbix can be daunting to implement initially, but prudent use of templates and autodiscovery can ease the integration hassles. In addition to an installable package, Zabbix is available as a virtual appliance for several popular hypervisors.

 

7) Observium


Observium is a network and host monitor that can scan ranges of addresses for systems to monitor using common SNMP credentials. Packaged as a LAMP application, Observium is relatively easy to set up and configure, requiring the usual installations of Apache, PHP, and MySQL, database creation, Apache configuration, and the like. It is designed to be installed as its own server with a dedicated URL, rather than under a larger Web tree.
From there, you can log into the GUI and start adding hosts and networks, as well as autodiscovery ranges and SNMP data to have Observium crawl around the network and gather data on each system discovered. Observium can also discover network devices via CDP, LLDP, or FDP, and host agents can be deployed to Linux systems to aid in data collection.

Observium combines system and network monitoring with performance trending. It can be configured to track almost any available metric. 

All of this data is presented in an easily navigated user interface that provides a multitude of statistics, charts, and graphs. This includes everything from ping and SNMP response times to graphs of IP throughput, fragmentation, packet counts, and so forth. Depending on the device, this data will be available for every port discovered and include an inventory of modular devices.
For servers, Observium will display CPU, RAM, storage, swap, temperature, and event log status. You can incorporate data collection and performance graphing on services as well, including Apache, MySQL, BIND, Memcached, Postfix, and others.
Observium plays nice as a VM, so it can quickly become a go-to tool for server and network status information. It's a great way to bring autodiscovery and charting to a network of any size.

 

Do-it-yourself


Too often, IT administrators think they can't color outside the lines. Whether we're dealing with a custom application or an "unsupported" piece of hardware, many of us believe that if a monitoring tool can't handle it immediately, it can't be handled. That's simply not the case, and with a little bit of elbow grease, almost anything can be monitored, cataloged, and made more visible.
An example might be a custom application with a database back end, like a Web store or an internal finance application. Management wants to see pretty graphs and charts depicting usage data in some form or another. If you're using, say, Cacti already, you have several ways to bring this data into the fold, such as constructing a simple Perl or PHP script to run queries on the database and pass counts back to Cacti or even an SNMP call to the database server using private MIBs (management information bases). It can be done, and it can generally be done easily.
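As a sketch of how little code that takes, the following Python script runs a count query and prints the result for a poller such as Cacti to pick up. The database path, table, and column names are all hypothetical:

```python
#!/usr/bin/env python
# Sketch of the glue script described above: query the application
# database and hand a count back to the poller on stdout. The database
# path, table, and column names are all hypothetical.
import sqlite3

DB_PATH = "/var/lib/webstore/store.db"   # hypothetical app database

def orders_last_hour():
    conn = sqlite3.connect(DB_PATH)
    try:
        (count,) = conn.execute(
            "SELECT COUNT(*) FROM orders "
            "WHERE created_at >= datetime('now', '-1 hour')"
        ).fetchone()
        return count
    finally:
        conn.close()

if __name__ == "__main__":
    # Cacti (or any poller) reads this single number from stdout.
    print(orders_last_hour())
```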
If it's unsupported hardware, as long as it speaks SNMP, you can most likely get at the data you need, though it may take a little research. Once you have the right MIBs to query, you can then use that information to write or adapt plug-ins to collect that data. In many cases, you can even integrate your cloud services into this monitoring by using standard SNMP on those instances, or by using an API provided by your cloud vendor. Just because you have cloud services doesn’t mean you should trust all your monitoring to your cloud provider. The provider doesn’t know your application and service stack as well as you do.
Getting most of these tools running usually isn't much of a challenge. They typically have packages available to download for most popular Linux distributions, if they aren't already in the package list. In some cases, they may come preconfigured as a virtual server. Configuring and tweaking the tools can take quite a while depending on the size of the infrastructure, but getting them going initially is usually a cinch. At the very least, they’re worth a test-drive.
No matter which of these tools you use to keep tabs on your infrastructure, it will essentially provide the equivalent of at least one more IT admin -- one that can't necessarily fix anything, but one that watches everything, 24/7/365. The up-front time investment is well worth the effort, no matter which way you cut it. Be sure to run a small set of autonomous monitoring tools on another server, watching the main monitoring server. This is a case where it's always best to ensure the watcher is being watched.

Droid Turbo: A beefy Android smartphone with better battery life


Motorola's Droid Turbo isn't sexy, but it offers speed, helpful software extensions, and the ability to go two days between charges

A common complaint of Android smartphone users is poor battery life. Motorola Mobility, now a subsidiary of Lenovo, has targeted such gripes with big-battery versions of some of its Droid smartphones. The new Droid Turbo — which costs $600 for the 32GB model and $650 for the 64GB model without a two-year contract and is currently available only from Verizon Wireless — is essentially a Moto X with a bigger battery, bringing more stamina to Motorola's current flagship phone.  
You indeed get better battery life: about the same as a (smaller) iPhone 6, meaning you can count on a day's full use as long as you have a 3G or better cellular connection. You'll get a couple of days between charges if your usage is mainly data over Wi-Fi or LTE. (Motorola's claim of two days of usage for high-volume callers is, well, optimistic.) 
Motorola includes what it calls a turbo charger for the Droid Turbo. It claims the turbo charger can charge a depleted Turbo in only 15 minutes. My experience is that it takes at least three times as long if the phone is not turned off. The turbo charging also slows down if your battery has more juice; it takes longer to charge a half-depleted Droid than a fully depleted unit. Honestly, it's not appreciably faster than other wall chargers.

 

A beefier smartphone with the hardware strength you'd expect


The big trade-off in the Droid Turbo is its heft: The phone is slightly thicker and heavier than most other smartphones because of that extra battery. That's not a problem — smartphones have become so light and thin that a "heavy" phone today is still quite comfortable and easy to grip and hold.
But for the record, at its thickest point, a Droid Turbo is 0.42 inch, versus 0.39 inch for the Moto X and 0.27 inch for the iPhone 6. As for weight, the Droid Turbo weighs 6.2 ounces, versus 5.1 ounces for the Moto X and 4.6 ounces for the iPhone 6.
The Droid Turbo sports an old-fashioned design that was more common in the 1990s: that executive style of shiny black with chrome highlights. It's not ugly, but it's not sexy either.
I'm not a fan of the Kevlar back, which feels tacky and heats up considerably while the device is charging or using its radios. On the plus side, it helps with the grip. You can buy a Droid Turbo model with a different back material -- woven nylon -- but I did not have such a device to test.
As for the Droid Turbo's processor, internal storage, screen resolution, and other hardware specs, the 5.2-inch-screen Droid Turbo is well-equipped. It's a speedy, capable device that will handle any serious mobile user's computing needs. These days, that's par for the course in a high-end device. Don't get hung up on such specs. 

 

Motorola's Moto Assist software helps set it apart


More interesting are the Motorola software extensions for the Droid Turbo. Its status widget, for example, is a very convenient tool to see weather, time, battery status, and calendar alerts in one place.

The Droid Turbo's status widget provides a lot of handy status information in one place, such as time, weather, battery level, and upcoming appointments.

Its Moto services — available for a couple years now on various Motorola phones such as the Moto X and G — are interesting, too. Essentially, they provide a collection of conveniences, such as a voice assistant in the style of "OK, Google" or "Hey, Siri." (Unlike Apple's "Hey, Siri," Android's "OK, Google" and the Droid Turbo's voice assistant work even if the phone is not plugged into a power outlet or powered USB port.) But given that Android has this capability anyhow, I'm not sure why Motorola has its own version, especially as it seems to work exactly like Google's.
Another Moto service notices when you reach for the phone and displays your current status, such as the lock icon if the phone is locked, an icon saying you have new mail, and so on. But the feature would be more useful if you could more easily act on what the screen shows. Unfortunately, there's no direct interaction available as there is in, say, the iOS or forthcoming Android Lollipop lock screen.
For example, when the Droid Turbo shows a lock icon as I reach for it, it would be nice to unlock the device from that icon. But you can't — you have to push the power button instead. (The normal Android lock icon appears if you tap the screen, but disappears before you can swipe it. The Moto service seems to override that Android feature.)
More useful are the Moto Assist features, such as automatically silencing the ringer during hours you set (similar to iOS's Do Not Disturb) or when the room is dark (a nice idea when you're in bed or in a movie theater). Moto Assist can have the Droid Turbo notice when you're driving (presumably by your speed) and automatically switch to Bluetooth output for audio and speak aloud text messages and callers' names while you drive.

The Moto Assist software that comes with the Droid Turbo offers useful assistive capabilities, such as silencing your phone during meetings, while asleep, and when in dark rooms.

My favorite Moto Assist feature is the one that silences your ringer and can optionally autoreply to calls via text messages while you're in a meeting — it checks your calendar to know when to go quiet. That's great!
The Droid Turbo won't be many people's top choice for an Android smartphone; the HTC One M8 has better visual appeal, for example. But the Droid Turbo should be in your final cut, along with the HTC One, Moto X, and Samsung Galaxy S5. 

Source: http://www.infoworld.com

Drupal sites, assume you've been hacked


SQL injection bug threatens the websites of enterprises, governments, and many other institutions using the open source Drupal CMS

Word broke yesterday of a major-league security issue involving Drupal, the open source content management system (CMS) used widely in enterprises and government. Come to think of it, "major league" doesn't begin to cover it: Drupal developers have admitted that if your installation wasn't patched before Oct. 15, 11 p.m. UTC, it's best to consider the entire site compromised.
How deep does the compromise run? Deep enough that simply upgrading to the latest version of Drupal won't help, and patching an affected website is only the first of many mitigation steps required.
Drupal has long been a staple of enterprise CMSes, powering sites as diverse as Whitehouse.gov and even InfoWorld.com itself at one point. Version 7, unveiled in 2011, was built with features designed specifically to appeal to enterprise users.
Attackers began using the vulnerability to launch automated SQL-injection attacks against websites within hours of its original disclosure, according to Web security research firm Sucuri. The bug wasn't found by Drupal's development team, but by an independent researcher following up on an issue that had been known since November of last year.
Acquia, the company that provides professional services, support, and hosting for Drupal, unveiled cloud-hosted versions of Drupal for business-grade deployments as another spur to adoption. The company began providing commercial support for Drupal back in 2008 and soon found around half of its customers were small businesses, with enterprises, public-sector outfits, nonprofits, and education forming the rest.
After the attack hit, the company claims it took proactive steps to protect customers running Drupal installations in its cloud -- the kind of protection the company touts as one of the advantages of using a hosted and managed installation of Drupal. According to Acquia, other commercial Drupal vendors (mainly Platform.sh and Pantheon) "all implemented different platform-wide protections for our respective customers," with the three companies collaborating on possible solutions.
One major takeaway is the speed at which attackers were able to leverage information about the exploit as word of it emerged. It shows today's cyber criminals are well-prepared to take advantage of a known exploit, especially one that uses a widely understood delivery method such as a SQL injection.
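For readers unfamiliar with the attack class, here is a generic Python illustration (not Drupal's actual code) of why SQL injection is so well understood: a query assembled by string formatting executes attacker-supplied SQL, while a parameterized query treats the same input as inert data:

```python
#!/usr/bin/env python
# Generic illustration of the attack class, not Drupal's actual code:
# a query assembled by string formatting executes attacker-supplied
# SQL, while a parameterized query treats the same input as plain data.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, is_admin INTEGER)")
conn.execute("INSERT INTO users VALUES ('alice', 0)")

malicious = "x' OR '1'='1"

# Vulnerable: the input becomes part of the SQL text itself.
rows = conn.execute(
    "SELECT * FROM users WHERE name = '%s'" % malicious).fetchall()
print("string-built query matched:", rows)    # matches every row

# Safe: the driver binds the input as a value, never as SQL.
rows = conn.execute(
    "SELECT * FROM users WHERE name = ?", (malicious,)).fetchall()
print("parameterized query matched:", rows)   # matches nothing
```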
InfoWorld's Roger Grimes expressed concern about the future of malware and the idea that "a vendor releases a patch and every possible machine is exploited before anyone even wakes up," as he put it in an email. "Does it eventually become a race between the vendor and malware writer for customer trust? ... Most bad guys don't want to exploit every computer immediately because all that does is ramp up the patching speed, and that's counterproductive to what they want."

Source: http://www.infoworld.com

Google updates cloud with new virtual technologies and price cuts

Google has embraced the Docker container technology and expanded the Firebase mobile development platform

Continuing to keep pace with chief cloud rivals Amazon Web Services and Microsoft Azure, Google has made a number of improvements to its Google Cloud Platform services.
A series of announcements shows the company embracing the newest virtual technologies, such as Docker containers and the Firebase platform for mobile developers, as it continues to cut the prices of its services.
In the realm of virtualization, the company has made it easier to use Docker containers, a new lightweight virtualization technology, by devising the Google Container Engine service for building and running them. The engine is based on the open source Kubernetes project.
The company also explained what it is doing with the mobile platform technology created by Firebase, which Google bought last month. Firebase offers a way to speed the process of connecting mobile applications to back-end data sources.
Google has expanded the range of queries that can be made against the data sets held by Firebase. Users can now sort the data by arbitrary fields, as well as filter the data. The service also now offers triggers, in which developers can define certain actions to take place if a set of conditions are met.
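As a hedged sketch of what such a query might look like through Firebase's REST interface, the following Python snippet orders by an arbitrary child field and filters on a value. The database URL, path, and field names are hypothetical; the field would need an index defined in the database's rules, and the data is assumed to be publicly readable:

```python
#!/usr/bin/env python
# Hedged sketch of the expanded query surface via Firebase's REST API:
# order by an arbitrary child field and filter on a value. The database
# URL, path, and field names are hypothetical; the field would need an
# index defined, and the data is assumed to be publicly readable.
import json
import urllib.parse
import urllib.request

BASE = "https://example-app.firebaseio.com"   # hypothetical database

params = urllib.parse.urlencode({
    "orderBy": '"channel"',    # Firebase expects the field name quoted
    "equalTo": '"general"',    # filter: channel == "general"
    "limitToFirst": 25,
})

with urllib.request.urlopen(f"{BASE}/messages.json?{params}") as resp:
    print(json.load(resp))
```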
Google has expanded the number of ways users can connect to the cloud service.
The company now offers direct peering, in which corporate customers can set up a network link directly into a Google data center. Google offers 70 points of presence in 33 countries. The company can also provide dedicated connectivity through seven carriers: Verizon, Equinix, IX Reach, Level 3, Tata Communications, Telx, and Zayo.
Expanding the connectivity options even further, Google will start offering VPN connections, which can provide a secure pipeline over the public Internet.
Like rivals Amazon Web Services and Microsoft, Google continues to aggressively cut prices of its services, which it has vowed to keep doing as the price of hardware decreases. The cost of copying data out of the Google Cloud has been cut by 47 percent. Storage prices have fallen as well: BigQuery storage has been cut by 23 percent, persistent disk snapshots by 79 percent, solid-state storage by 48 percent, and Cloud SQL storage by 25 percent.
The company introduced a number of other features to its cloud, including:
  • Managed virtual machines, introduced earlier this year, are now in full beta release.
  • A debugger, in beta form, that could provide users with more information when services don't operate as expected.
  • A type of compute engine based on solid-state disks. The "Local SSD" compute engine can execute up to 680,000 read IOPS (input/output operations per second) or 280,000 write IOPS.
  • An autoscaler has been released in beta that can automatically grow or shrink a fleet of virtual machines based on customer needs.

Cloud expected to make up three-fourths of data center traffic by 2018


Cisco predicts by 2018, a quarter of the world's population will use personal cloud storage

Over the next four years, data center traffic is expected to nearly triple, largely thanks to a booming cloud computing industry.
That’s according to Cisco’s fourth annual Global Cloud Index, which predicts that the cloud will account for 76 percent of total data center traffic by 2018, up from 54 percent last year.
The report, released on Tuesday, also showed that by 2018, half of the world's population will have residential Internet access, and 53 percent of those users will store content on personal cloud storage services.
Cisco's study was released the same day that Google renewed its push to pick up momentum in the public cloud market by dropping prices and adding and updating features.
Google, along with cloud rivals Amazon, IBM, and Microsoft, is pushing its cloud efforts because the market is growing so fast.
In the spring Forrester Research reported that the public cloud market is set for “hypergrowth,” and is expected to reach $191 billion by 2020. That’s a big jump from the $58 billion market at the end of 2013.
While the public cloud is showing a 50 percent growth rate, hybrid and private clouds come in at a strong 40 percent and 45 percent, respectively, according to Synergy Research Group.
Cisco’s report echoes similar growth predictions.
The study predicts that global data center traffic will nearly triple from 2013 to 2018, growing from 3.1 zettabytes per year in 2013 to 8.6 zettabytes per year. A zettabyte is a trillion gigabytes.
Cisco said 8.6 zettabytes of data center traffic is equivalent to streaming all of the approximately 500,000 movies and 3 million television shows ever made in ultra-high definition 250,000 times.
The company’s forecast is calculated using data on server shipments to data centers, the installed base of workloads, and the volume of bytes per workload per month. The company said it used data from market research firms Gartner, IDC, Synergy Research and Juniper Research.
This story, "Cloud expected to make up three-fourths of data center traffic by 2018" was originally published by Computerworld.

Thursday 6 November 2014

Sharp says China smartphone screen shipments may exceed target



(Reuters) - Japanese display maker Sharp Corp, a supplier to Apple Inc, said its shipments of screens to Chinese smartphone makers may exceed its target for the fiscal year to next March as it expands its business to new models.

Norikazu Hohshi, head of Sharp's device business, said on Thursday that the company would be shipping screens to 15 Chinese smartphone manufacturers this fiscal year and that it was in talks to supply screens for 25 new Chinese models, with shipments to begin as early as the January-March quarter.

The recent rise of Chinese smartphone manufacturers such as Xiaomi Technology Co Ltd [XTC.UL] helped to boost Sharp's shipments of small and mid-sized liquid crystal displays by around 50 percent in the six months to end-September, to $1 billion.

The company forecasts the business will bring in $2 billion in revenue for the full year to next March.
Worries have mounted about softening prices of LCD screens for Chinese smartphones. Japan Display Inc, the world's largest maker of LCD screens for smartphones, last month warned that it expected a 10 billion yen ($87 million) net loss for the year to March, reversing a previous forecast for a net profit.

Hohshi acknowledged that there had been a sudden drop in LCD screen prices for Chinese smartphones over the past six months but said this had not affected the high-resolution screens that Sharp is supplying.

Source: http://dunyanews.tv

New malware targeting Apple devices identified


NEW YORK: Palo Alto Networks Inc has discovered a new family of malware that can infect Apple Inc's desktop and mobile operating systems, underscoring the increasing sophistication of attacks on iPhones and Mac computers.

The "WireLurker" malware can install third-party applications on regular, non-jailbroken iOS devices and hop from infected Macs onto iPhones through USB connector-cables, said Ryan Olson, intelligence director for the company's Unit 42 division.

Palo Alto Networks said on Wednesday it had seen indications that the attackers were Chinese. The malware originated from a Chinese third-party apps store and appeared to have mostly affected users within the country.

The malware spread through infected apps uploaded to the app store, which were in turn downloaded onto Mac computers. According to the company, more than 400 such infected apps had been downloaded over 350,000 times so far.

It's unclear what the objective of the attacks was. There is no evidence that the attackers had made off with anything more sensitive than messaging IDs and contacts from users' address books, Olson added.

But "they could just as easily take your Apple ID or do something else that's bad news," he said in an interview.

Apple, which Olson said was notified a couple weeks ago, did not respond to requests for comment on Wednesday.

Once WireLurker gets on an iPhone, it can go on to infect existing apps on the device, somewhat akin to how a traditional virus infects computer software programs. Olson said it was the first time he had seen it in action. "It's the first time we've seen anyone doing it in the wild," he added. – Reuters

Source: http://www.samaa.tv

The big one: The makings of a global cyber attack


Surprise -- the underlying technology matters less to an attack's success than basic human determination

When a potentially major security flaw gets announced, à la SandWorm, Shellshock, and Heartbleed, those of us in the computer security field can’t be sure it’s a “big one” that would attack or compromise the majority of the computers in the world or your enterprise. Whether the technical methods are familiar or novel, most of the discovered attack methods don’t go big.
We’ve had lots of “big ones” in the past. The Robert Morris worm of 1988 infected around 6,000 computers. That doesn’t sound like a lot today, but back then, it represented about 10 percent of the computers hooked to the Internet. Since then, far bigger and faster-spreading worms appeared, most notably Michelangelo, Code Red, Melissa, SQL Slammer, ILoveYou, and Blaster.  
In those heady days, a single infection would turn into a global outbreak in a day or less. The record belongs to SQL Slammer, which infected nearly every unpatched SQL server on the Internet and clients running SQL in about 10 minutes.
Luckily, we haven’t seen a worm go global at such a pace in a while. Gladly behind us are the days when we had to shut down the mail server, get everyone off their computers so that we could clean them up, and call everyone who received one of our infected emails. Then again, maybe I shouldn’t be so confident. Now we have to worry about advanced human attackers that steal intellectual property and money. I’d love to fret about a simple, noncriminal malware program.
We are very bad at predicting which vulnerability will go global. As with real-life wars, no one can predict which conflict will turn into a world war until we are in it. Likewise, real-life experts are constantly predicting that the latest conflict will lead to nuclear Armageddon. But it hasn’t happened.
In the digital world, for an infestation to quickly go global, it must be “wormable,” meaning that a hacker can take advantage of the vulnerability using roving malicious code that bounces from computer to computer, instead of having to manually test each computer. If it can’t be wormed, it probably won't go international.
That’s the conventional thinking today. Perhaps in the future a malicious coder will mess with a big cloud service and create a new malware propagation method. Viruses, malicious pieces of code that infect other code or documents to spread, can go global quickly, too. But they aren’t as popular as worms anymore.
However, most worms and viruses don’t go big. Why? Because there is a huge gap between ability and action, between capability and causation.
I don’t know why some malware programs go big and others don’t, but I have noticed a few ways to categorize those that went global:
  • Vulnerabilities that we knew about, that we worried about, and that still went big, such as Blaster and Michelangelo -- these are uncommon
  • Techniques that come out of the blue and surprise us all, such as SQL Slammer and Code Red
  • Techniques that we knew about for a while but for unknown reasons take off later than when they were discovered, such as Melissa and ILoveYou
  • Long-known public techniques used continuously by multiple attackers over long periods of time, such as spearphishing and pass-the-hash attacks (popular today)

Those broad classes don’t help identify what might be a “big one.” What causes an attack to be a “big one” remains a mystery to the computer defense industry. But I believe three nontechnical factors are often involved:
  1. Motivation and intent: A criminal agency, a spy organization, or another entity wants to use a method to achieve one or more goals.
  2. Loss of control: The malware coder didn’t seem to realize how quickly his creation would spread, such as with the Robert Morris worm, SQL Slammer, and Melissa.
  3. Placement and timing: The malware happened to resonate with people. For example, I’ve always believed Melissa went global because its creator promised free porn in a day when free porn wasn’t the norm.
If I had to pick one reason a worm went global, I'd have to go with motivation and intent. Many of the hacks we worried about didn’t happen until a bad guy finally tried it, such as Kerberos ticket manipulation.
I’m sure there are other factors I’m not thinking about right now. But I know that capability and potential are still poorly correlated with actual damage. If we could better predict what will go big, our job would be a lot easier.
You can think of cyber threats the same way the military thinks about weapons of mass destruction: Many nations (even individuals) know how to build weapons of mass destruction. The major entities won’t use them unless absolutely necessary, if ever. But now more entities have access to them than can possibly be controlled over the long term.
Someday a weapon of mass destruction will be used against a major (unsuspecting) population. That day is coming, and we can’t possibly predict when. The capability and potential have been there for a long time; it’s a question of timing.
Even more unsettling, it doesn't matter if we make it harder or easier to carry off the big attacks that will cause huge disruption. Plus, we are so poor at computer security (in general) that we give attackers dozens to hundreds of avenues to try when they get motivated. 

Source: http://www.infoworld.com

Cyber espionage group launches sophisticated phishing attacks against Outlook Web App users

A cyberespionage group has been using advanced spear-phishing techniques to steal email log-in credentials from the employees of military agencies, embassies, defense contractors and international media outlets that use Office 365's Outlook Web App.

The group behind the attack campaign has been operating since at least 2007, according to researchers from Trend Micro, who published a research paper on Wednesday about the attacks, which they dubbed Operation Pawn Storm.

The Pawn Storm attackers have used a variety of techniques over the years to compromise their targets, including spear-phishing emails with malicious Microsoft Office attachments that installed a backdoor-type malware program called SEDNIT or Sofacy, or selective exploits injected into compromised legitimate websites.

The group used one particularly interesting technique in email phishing attacks against organizations that use the Outlook Web App (OWA), which is part of Microsoft's Office 365 service.

For each phishing attack, the group created two fake domains: one very similar to that of a third-party website known to the victims -- like that of an upcoming industry conference for example -- and one similar to the domain used by the targeted organization's Outlook Web App deployment.

The attackers then crafted phishing emails with a link to the fake third-party site where they hosted non-malicious JavaScript code whose purpose was twofold: to open the actual legitimate site in a new tab and to redirect the already opened Outlook Web App browser tab to a phishing page.

"The JavaScript made it appear that the victims' OWA sessions ended while at the same time, tricked them into reentering their credentials," the Trend Micro researchers wrote in their paper. "To do this, the attackers redirected victims to fake OWA log-in pages by setting their browsers' open windows property."

This technique does not exploit any vulnerabilities and works in any popular browser, including Internet Explorer, Mozilla Firefox, Google Chrome and Apple's Safari, the researchers said. However, two conditions need to be met: the victims need to use OWA and they need to click on the embedded links from OWA's preview pane, they said.

This can be a powerful attack, because the victims know they had a legitimate OWA session opened in that browser tab and might not check if the URL has changed before re-entering their credentials.
In addition to using domain names that were very similar to those used by the targeted organizations for their real OWA log-in pages, in some cases the attackers even purchased legitimate SSL certificates so that the victims' browsers display the HTTPS secure connection indicators for the phishing sites, the Trend Micro researchers said.

Among those targeted with this technique were employees of the U.S. private military company ACADEMI, formerly known as Blackwater; the Organization for Security and Co-operation in Europe (OSCE); the U.S. Department of State; U.S. government contractor SAIC; a multinational company based in Germany; the Vatican Embassy in Iraq; broadcasting companies in several countries; the defense ministries of France and Hungary; Pakistani military officials; Polish government employees; and military attachés from various countries.

The phishing baits used by the attackers included well-known events and conferences that their victims were interested in.

"Apart from effective phishing tactics, the threat actors used a combination of proven targeted attack staples to compromise systems and get in to target networks -- exploits and data-stealing malware," the Trend Micro researchers said. "SEDNIT variants particularly proved useful, as these allowed the threat actors to steal all manners of sensitive information from the victims' computers while effectively evading detection."

Source: http://www.infoworld.com

How the Republican Senate will impact tech


Expect a much higher cap on H-1B visas, no progress on Net neutrality, and a decent chance at patent reform

Early in his first term, President Obama told defeated Republicans that “elections have consequences.” Now that the Republicans will control the Senate and have an even bigger margin in the House, the proverbial shoe is on the other foot, and there will be consequences for the tech industry, some good, some bad, some too close to call.  
You’ll likely see changes in immigration policy favoring companies that want to raise the cap on H-1B visas, but the already slim chances of any significant action to protect Net neutrality will fade to zero. There may well be legislation to continue the reform of the patent system, but more Congressional action to further curb government spying and data vacuuming is uncertain at best. 
The tech industry has become a serious player in Washington, D.C., learning how to use lobbyists and campaign contributions to push an agenda. That won’t change, but tech companies like Apple, Google, and Microsoft will get less clout per dollar, while telecom and cable giants will live large.
There was one clear victory for tech: the re-election of Sen. Al Franken (D-Minn.). Franken has been firm in support of Net neutrality and against the NSA’s campaign of spying. He’s also on Comcast’s bad list -- a badge of honor -- because he opposes the dangerous merger of Comcast and Time Warner Cable.

 

Net neutrality: No way, no how


No single tech-related issue will be more affected by Tuesday's U.S. election results than Net neutrality. Blocking encroachments by carriers and cable companies was already a long shot. FCC Chairman Tom Wheeler has proved to be a weak-kneed defender of the Internet, and facing a unified Republican Congress will put no starch in his spine.
Even if Wheeler suddenly gets religion and reclassifies ISPs as common carriers that can be regulated by the FCC, Congress will almost certainly attempt to kill it. As the Washington Post noted, “the Gingrich-era Congressional Review Act gives Congress the power to erase specific agency rules. Indeed, after the FCC passed its first round of open Internet rules in 2010, the Republican-led House passed a Resolution of Disapproval, arguing that the agency should not weigh in on the issue at all.”
However, it’s possible that President Obama would veto such a move, or maybe Wheeler will move fast enough so that Republicans won’t be able to pass it in the lame-duck session.
Making matters worse was the defeat of Sen. Mark Udall in Colorado -- a staunch advocate for Net neutrality and privacy -- by Rep. Cory Gardner, who has been on the record against reclassification, and the easy re-election of Rep. Fred Upton (R-Mich.), who will remain chairman of the powerful House Energy and Commerce Committee, which has jurisdiction over the Internet, telecommunications, and media industries.

 

More H-1B visas


This is one the tech industry will likely win -- though as I’ve argued for years, it will be a defeat for IT workers. 
Silicon Valley CEO after CEO has spoken, written op-eds, and spent money in the pursuit of a higher cap on H-1B visas. There’s a fair amount of support for it on both sides of the aisle, so the addition of six or seven Republicans will make it easier to pass. Ironically, one of the strongest opponents of H-1B abuse is Sen. Chuck Grassley (R-Iowa), whose term has a few years to run.
It’s not as if the flow of H-1B visa holders is a mere trickle. Last year, tech companies snapped up all 65,000 H-1B slots in five -- count 'em -- days. This year, all 65,000 (plus 20,000 more for holders of advanced degrees) were gone in, yes, five days, despite the marked slowing in IT hiring in the early part of the year and the increased use of benefit-deprived contractors.
The tech barons want more. My guess is they’ll get it this time.
There is one caveat. While Republicans would like to raise the cap, they have little or no interest in broader immigration reform. Obama would certainly prefer a broader bill, so he might block a very narrow one. But given his ties to the tech industry, I suspect he’d sign it.

 

Will the new Congress rein in the NSA?


This is interesting and not a foregone conclusion by any means, says Mark Jaycox, legislative analyst for the Electronic Frontier Foundation. Two of the new Republican senators, Gardner in Colorado and Joni Ernst in Iowa, have decent records on government intrusion, he says.
In some ways this is not the most partisan issue in the Senate. California’s Dianne Feinstein -- who chairs the Intelligence Committee -- is no friend of privacy, but libertarians like Kentucky’s Rand Paul and some younger Republicans are at least wary of government intrusion. 
EFF's Jaycox notes that much will depend on who the new majority leader seats on the Intelligence and Judiciary committees. However that shakes out, “it is a sad day when Sen. Udall lost; he’s been at forefront of transparency issues and the right of the public to be informed,” he said.
Another privacy advocate, Sen. Mark Begich (D-Alaska) may also lose his seat, though as I write this, the election is still too close to call.
Frankly, the tech industry will have to lead on this one. Apple and other companies that are losing overseas business and customer confidence are adding encryption and more security features to their products and pushing back against overreaching demands for customer data.

 

Patent reform looking better


Because outgoing majority leader Harry Reid of Nevada and other Dems are generally in bed with trial lawyers, they’ve hesitated to lean on the patent trolls. Republicans are more likely to go ahead with legislation that stops stupid litigation. Indeed, they helped pass an antitroll bill in the House, but it died in the Senate. Still, there’s been progress.
It’s no slam dunk, though. Reform advocates, many of whom are close to the Democrats, will have to reach out to the GOP and “credibly voice small-business concerns as opposed to coming across as IP socialists,” says Florian Mueller, a patent activist and consultant.
Tech has a good deal of influence in Washington, but it isn’t always beloved, even on its home turf. Although Silicon Valley isn’t in the throes of a San Francisco-style antitech rebellion, people in the Valley don’t always trust the digerati. One example: The apparent loss of tech-backed House candidate Ro Khanna to incumbent Mike Honda, an old-line, pro-labor liberal.
How all this translates into actions that affect tech workers and the industry will take a little while to become clear.
Congress moves very slowly, and as we get closer to the next presidential campaign, it will move even more slowly. There might be a window of a year or so to get anything done in Washington, whether it's about tech or other issues. We’ll see.

Source: http://www.infoworld.com

The canary in the data mine is dead


You already know that gobs of data about you are strewn across the Internet. The scary part is when they put it all together

Recently, I read a tweet from one of my favorite journalists and activists, Asher Wolf, about Samaritans Radar, an app that mines Twitter for keywords indicating someone might be suicidal or “struggling to cope.”

Concerns have been raised about privacy issues with Samaritans Radar -- concerns that should be taken with a grain of salt, because the app mostly mines public tweets. You could say that being upset about this is the digital equivalent of yelling in the town square, then grousing the next day that someone quoted you in the paper.

More accurately, Samaritans Radar is like putting a recording device in every town square and monitoring it for catchphrases, sort of like what the NSA now does with ... everything. Samaritans Radar is a bit creepy, but in the wrong hands, it could also be destructive.
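
To appreciate how little machinery such monitoring takes, consider a minimal, hypothetical sketch of a Radar-style keyword scan (the watch phrases and sample tweets are invented; Samaritans has not published its matching logic):

```typescript
// Hypothetical sketch: flag public tweets that contain "struggling to cope"-style
// phrases. The watch list and sample timeline are invented for illustration.
const WATCH_PHRASES = ["struggling to cope", "can't go on", "no way out"];

function looksConcerning(tweet: string): boolean {
  const text = tweet.toLowerCase();
  return WATCH_PHRASES.some(phrase => text.includes(phrase));
}

const publicTimeline = [
  "Great gig last night!",
  "Honestly, I'm struggling to cope this week",
];

// Anyone following this account and running the app would be alerted
// to the second tweet -- well-meaning helper and troll alike.
console.log(publicTimeline.filter(looksConcerning));
```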

What if I’m a cretin and decide that rather than help those with psychological issues, I’d like to urge them along? If you've ever read Reddit or the comments on YouTube or 4chan, you know that beneath all that great stuff brought to you by the Internet is a sewer teeming with toxic trolls who revel in berating vulnerable naifs, racial minorities, and women.

I'm sure that Samaritans.org is a group of well-meaning people who happened to lack a critical thinker at the helm. Which brings up an important issue: Simply because we can, should we?
Consider this snippet from a recent post by Joe Ferns, executive director of policy, research, and development at Samaritans.org:
We condemn any behavior which would constitute bullying or harassment of anyone using social media. If people experience this kind of behavior as a result of Radar or their support for the App, we would encourage them to report this immediately to Twitter, who take this issue very seriously. 
Nice sentiment, and to be sure, what is yelled in the town square is public. But with the technology to mine the data, correlate it, and republish it, what are the ethical and liability concerns? Samaritans Radar crosses local, provincial, and national boundaries, so any number of laws could come into play, from copyright to privacy and stalking statutes. The Samaritans organization may be partly protected, but what about your company?

Hoovering everything for fun and profit

Mining social networks is already commonplace. Say anything nasty about any U.S. airline, and it will respond. The company still won’t fix its crap service, but it'll respond to tweets with canned expressions of sympathy. Some companies even sue people for trashing their brand (which inevitably backfires).

Using open source tools and technologies, I know how to create a social graph, track the original bad sentiment back to its source, and intervene if necessary. It isn’t even hard. However, are there cases where I should refuse? Are there situations where I’m both ethically and potentially legally obligated to say, “I’m sorry, that is a bad idea”?

I bet most of you don’t even use Twitter. You probably do your capitalist rendition of Maoist self-reporting via Facebook -- and if you’re geek enough, maybe via Google Plus. But what about Verizon, AT&T, and their permacookies? Every unencrypted request you send across the Web from your phone (or possibly tethered from your phone) has an extra header added that uniquely identifies you. All it takes is any piece of identifying information anywhere on the Internet, and everything you do can be tracked by Facebook, Google, and their affiliates.
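
The injected Verizon header has been reported as X-UIDH; assuming that name, a short hypothetical sketch shows how trivially any server you touch over an unencrypted connection can harvest it (the Express app and route are invented for illustration):

```typescript
// Hypothetical sketch of an ad server harvesting a carrier-injected
// "permacookie". X-UIDH is the header name reported for Verizon; everything
// else here is invented. Requires: npm install express
import express from "express";

const app = express();

app.get("/ad.js", (req, res) => {
  // The carrier appends this header in transit to unencrypted requests;
  // clearing cookies or browsing "incognito" does nothing to remove it.
  const permacookie = req.header("x-uidh");
  if (permacookie) {
    // Any party that sees this ID can correlate it with other data about you.
    console.log(`persistent subscriber ID seen: ${permacookie}`);
  }
  res.type("application/javascript").send("// ad payload");
});

app.listen(8080);
```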

I realize this isn’t new. Browser cookies have been doing it for almost two decades, but you can delete your browser cookies and start anew. You can go into “incognito mode” or turn off cookies. Verizon is doing this further up the chain.

What are the downsides of such constant exposure? Recently, right after I had some dental work done, I announced I was signing off Twitter for a bit and taking a Percocet. I did this to inform family, friends, and casual followers that I wouldn’t be posting -- or if I did, not to worry if I posted something strange. I didn’t intend to be added to a government or corporate database of potential drug offenders, possibly cataloged for risk, and potentially subjected to ads for other pain medications or medical treatments. Do you have any doubt that at least some of that happened?

When it comes to data about you that you didn’t directly intend to communicate, we have only the “terms of service” -- which protect a company’s right to collect your data and use it however it pleases. Nothing really protects you.

Should any legal protections or regulations be put into place? Could they be enforced? Do we all have to start throwing CryptoParties using Tor and the various alternatives to CipherShed?
That seems like a lot more work than this Internet thing is worth.

Source: http://www.infoworld.com

jQuery 3.0: More interoperability, less Internet Explorer


The upcoming edition of the JavaScript library will come in two versions: one that supports IE8, and one that does not

jQuery is moving toward a 3.0 release anticipated in early 2015, the core developer of the JavaScript library said this week.

Key features planned for version 3.0 include support for the Promises/A+ specification, for interoperable JavaScript promises; use of the requestAnimationFrame method to improve animation performance; and the end of support for the Internet Explorer 6 and 7 browsers, said Dave Methvin, lead developer of the jQuery core and president of the jQuery Foundation.
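
To see what the Promises/A+ work would mean in practice, here is a hedged sketch of the interoperability developers could expect once jQuery's Deferreds follow the spec (the endpoint is invented; $.getJSON is jQuery's existing helper):

```typescript
// Hedged sketch: with Promises/A+-compliant Deferreds (planned for jQuery 3.0),
// jQuery thenables should interoperate cleanly with native promises.
import $ from "jquery";

// Wrapping a jQuery request in a native Promise becomes predictable once the
// Deferred follows the Promises/A+ resolution procedure.
const users: Promise<unknown> = Promise.resolve($.getJSON("/api/users"));

users
  .then(list => console.log("loaded", list))
  // Rejections flow into catch() per the spec.
  .catch(err => console.error("request failed", err));
```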

"jQuery simplifies Web development in three ways," Methvin said in an email. "It eliminates quirks that arise in specific browsers, it provides a simple API for manipulating Web document, and it has an incredible community that creates useful plug-ins you can use on your website."

Developers of jQuery 3.0 are not planning a lot of major architectural changes, so it will be close to a drop-in replacement for older versions, Methvin said. "The most important message for Web developers is that although it's a major change of numbers, it's nothing to fear. We are happy with jQuery's API and most developers seem to agree. So the changes we anticipate are incremental improvements."

Still, there will be a jQuery Compat 3.0 version, the successor to jQuery 1.11.1, along with jQuery 3.0, the successor to jQuery 2.1.1. Compat will offer compatibility with more browsers, albeit at the expense of file size and possibly performance. "There is still quite a bit of Internet Explorer 8 out there in the world, and we want jQuery to support the Web developers who still need IE8 support," Methvin said. "However, there are performance and size benefits to be had by not supporting IE8 and some older browsers. So we have two packages that can serve those different needs." 

Newer technologies, such as Famo.us, have emerged to boost the JavaScript realm. But Methvin sees Famo.us as a complementary technology rather than as a competitor to jQuery. "For example, you could use the Famo.us rendering engine inside a jQuery plugin."  

Source: http://www.infoworld.com

Docker container or VM? Canonical's LXD splits the difference


With LXD, Docker containers can emulate virtual machines while maintaining close-to-the-metal speed and high security

CoreOS was the first to demonstrate how Docker and containerization could remake Linux. Now Canonical is getting into the game, albeit from a different direction.
Canonical's new project, LXD, or the Linux Container Daemon, lets users work with Docker containers to deploy the functional equivalent of full-blown isolated Linux VMs, not merely individual containerized apps.
In a video, Canonical product manager Dustin Kirkland described LXD as a system for running "full-system containers with the performance you'd expect from bare metal, but with the experience you expect from a virtual machine."
LXD uses containers to virtualize the behavior of an entire system, running as close to the metal as possible. Thus, users can launch new machines in less than a second and have an unprecedented degree of density for those LXD machines -- on the order of hundreds of virtualized machines per physical host.
In an email, Kirkland noted that the project grew out of several initiatives: Canonical's work with OpenStack, the company's efforts submitting upstream changes for LXC (the technology Docker is based on), and the needs of its customers. The company "found considerable customer and market interest in running essentially general, full operating system environments within containers," Kirkland explained, "in the interest of greater security, improved performance, higher density, and extensive portability."
Like many container-centric projects these days (Docker included), LXD is written in Go and provides both a CLI and a RESTful API to its functions. It also includes extensions to allow containers to access storage and networking securely, with the security functions using the same technologies as Linux containers: cgroups, user namespaces, and (when vendor support exists for it) hardware-assisted containerization.
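
As a purely illustrative sketch of what driving such a REST API might look like, here's a hypothetical query against a local LXD socket (the socket path and endpoint are assumptions, not documented API, since the interface was still taking shape at the time of writing):

```typescript
// Purely illustrative: list containers from a REST-style daemon like LXD over
// its local Unix socket. The socket path and endpoint are assumptions.
import * as http from "http";

const options: http.RequestOptions = {
  socketPath: "/var/lib/lxd/unix.socket", // assumed local LXD socket
  path: "/1.0/containers",                // assumed listing endpoint
  method: "GET",
};

const req = http.request(options, res => {
  let body = "";
  res.on("data", chunk => (body += chunk));
  res.on("end", () => console.log("containers:", body));
});

req.on("error", err => console.error("request failed:", err));
req.end();
```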
Aside from the high density of systems and native-speed performance on the host hardware, LXD also features high-speed live migration. This function, which allows the contents of active containers to move between physical hosts, was built using another feature for which Canonical has submitted work upstream: Checkpoint/Restore In Userspace (CRIU). Kirkland described demos of the feature: "We were playing Doom in one container and live migrated it back and forth between two different hosts, with continuity."
The hardware-assisted containerization feature might raise the most eyebrows. In its effort to make LXD a real hypervisor, Canonical says it's "working with silicon companies to ensure hardware-assisted security and isolation for these containers, just like virtual machines today."
The big disadvantage is that LXD is strictly a Linux-on-Linux solution and exploits functionality only available on Linux at this time. When asked if a Windows port might be possible in the future, given recent word that Microsoft is planning to add containerization support to Windows in some form, Kirkland didn't provide a direct answer: "Due to the nature of containers," he wrote, "LXD can only really ever be Linux on Linux. That's our focus.  Other versions of Linux user space (i.e., non-Ubuntu) can run in LXD. But fundamentally, it will need to be Linux." 

Source: http://www.infoworld.com