Lough Swilly lifeboat volunteers came to the aid of a fishing vessel during a successful mission off Fanad Head on Tuesday afternoon.

The fishing boat was six nautical miles from shore when it got into difficulty due to a fouled prop. The Lough Swilly ALB was launched at 1.10pm to assist the crew. Rescue volunteers attached a tow line to the vessel and landed the crew at Ballyhoorisky Pier in Fanad.

The mission was a success and no injuries were reported.

(Feature image via Lough Swilly RNLI.)

Fishing boat crew safely rescued by Lough Swilly RNLI was last modified: August 30th, 2018 by Rachel McLaughlin
Brand South Africa, in partnership with Junior Chamber International South Africa, invites you to the second “Shape the Future” Forum. The event will discuss the concept of active citizenship, cover the analysis of community needs, and map out how young people can address such challenges by creating sustainable solutions. Follow @Brand_SA for updates from 6:30pm tonight, or follow a live webcast here.
7 July 2014

South African Deputy President Cyril Ramaphosa arrived in Colombo, Sri Lanka on Monday for his first visit as Special Envoy of the President. President Jacob Zuma appointed Ramaphosa as Special Envoy to Sri Lanka and South Sudan during his State of the Nation address in February, following a request from both countries for assistance in bringing about peace and reconciliation.

Despite the official end of the decades-long civil war between the Sri Lankan state and the militant separatist Tamil Tigers in 2009, religious conflict continues to dog the south Asian island nation, against a backdrop of rising ethnic tensions between the largely Buddhist Sinhalese majority and the Muslim minority.

Senior delegations from the Sri Lankan government and the Tamil National Alliance visited South Africa in February and April respectively. Both delegations met Ramaphosa and invited him to visit Sri Lanka.

“We are going to listen to the Sri Lankans,” Ramaphosa said ahead of his visit. “We have already met them a few times in South Africa, but this time around we are going to go to Colombo and meet the government, the president of Sri Lanka and a number of other government ministers.”

He is also expected to meet opposition parties and travel to the north, where the conflict was at its fiercest. “We will discuss with people in the community. We will also try to help the Sri Lankans with the truth and reconciliation process, their own constitutional reform, and make sure Sri Lanka does indeed become a stable country where they will enjoy human rights.”

Ramaphosa is being accompanied by Deputy International Relations Minister Nomaindia Mfeketo. He is expected to return to South Africa on Wednesday.

SAnews.gov.za and SAinfo reporter
The identification and activation of vPro systems that are not remote-configuration capable, and that have not completed the provisioning/activation process before being placed in the field, can be a daunting task in a large enterprise environment, especially in the common situation where vPro systems have been deployed before the backend infrastructure is in place.

To help address this, we created a small utility that leverages MEInfo to capture MEBx details related to activation and store this data in the Windows registry. This allows automated inventory methods to collect and report the information enterprise-wide, enabling detailed planning of a remote activation strategy.

The utility requires the same set of prerequisites as MEInfo to produce full detail, such as the HECI drivers being in place and Administrator privileges on the local machine, but it is small, silent, and software-deployment friendly.

All of the data that iAMT Scan generates is stored in the local system registry under:

HKEY_LOCAL_MACHINE\HARDWARE\INTEL\iAMT SCAN DATA

consisting of the following string value entries:

AMTSetupStatus – Provisioning state of the Management Engine
AMTVersion – AMT version
BIOSVersion – System BIOS version
Date – Date the scan was performed
DHCPServer – IP address of the DHCP server that the lease was obtained from
DNSServerOrder – DNS search order
FQDN – Fully qualified name of the host based on NT/AD domain
DNS_FQDN – Fully qualified DNS name for the host adapter
Gateway – Gateway (router) IP address
HECIVersion – HECI driver version
MAC – AMT-capable host MAC address
IPAddress – Current IP address of the host at time of scan
LMSVersion – LMS driver version
Make – Manufacturer
Model – Manufacturer’s model
SerialNumber – Machine’s serial number
SMSSiteCode – SMS site the local machine is managed by (if available)
SubnetMask – Current IP subnet mask
SystemName – Host machine name
UNSVersion – UNS driver version
UUID – The system’s UUID
ProvisionServerPing – Ping status for ‘provisionserver’ DNS entry

iAMT Scan v.0.3.0 Use Guide:

iAMT Scan v.0.3.0 Executable:
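Because all of the values live under a single registry key, inventory tooling can collect them with a plain `reg query` and parse the text output. Below is a minimal Python sketch of such a parser; the sample output is illustrative (the host name and version strings are made up, not captured from a real machine), and this script is not part of iAMT Scan itself.

```python
def parse_reg_query(output: str) -> dict:
    """Parse `reg query` text output into {value_name: data}.

    Value lines have the form:  <name>  <REG_type>  <data>
    The key-path line and blank lines are skipped.
    """
    values = {}
    for line in output.splitlines():
        parts = line.split(None, 2)  # name, type, rest-of-line as data
        if len(parts) == 3 and parts[1].startswith("REG_"):
            values[parts[0]] = parts[2]
    return values

# Illustrative sample of `reg query "HKLM\HARDWARE\INTEL\iAMT SCAN DATA"` output
sample = """
HKEY_LOCAL_MACHINE\\HARDWARE\\INTEL\\iAMT SCAN DATA
    AMTVersion    REG_SZ    3.2.1
    SystemName    REG_SZ    HOST01
    ProvisionServerPing    REG_SZ    Success
"""

info = parse_reg_query(sample)
print(info["AMTVersion"])  # 3.2.1
```

Collecting the text on each host and parsing it centrally keeps the per-machine footprint to a single built-in command, which fits the "small, silent, and software-deployment friendly" goal of the utility.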
The artificial intelligence (AI) taxonomy spans capabilities that enable systems to sense, reason, act and adapt. These capabilities stem from technologies for machine learning, including deep learning and classic machine learning, as well as technologies for reasoning systems. In this post, I will focus on the hardware technologies for machine learning (ML). This is the area of AI that enables algorithms to learn from their experiences and improve their performance over time.

[Figure: AI Taxonomy]

The machine learning algorithms at the heart of many AI solutions bring a unique set of technical challenges. For starters, these algorithms have high arithmetic density. Training a model with a billion parameters (a moderately complex network) can take days unless properly optimized and scaled. Further, this process often needs to be repeated to experiment with different topologies to reach the desired level of inferencing accuracy. This requires a huge amount of computational power.

And then there is the data to think about. When you train a model, performance scales with the amount of data you feed into it. For example, the performance of a speech recognition algorithm might improve greatly, to near human-level performance, if it is fed enough data (up to a point). This too requires a significant amount of memory and computing capacity.

Until recently, these challenges were a major roadblock for neural networks. While the mathematical concepts behind neural networks have been around for decades, until now we have lacked the combination of technologies required to accelerate the adoption of deep learning. That combination has two key components: enough compute to build sufficiently expressive networks, and enough data to train generalizable networks.

Today, thanks to Moore’s Law, the rapid digitization of the content around us, and the accelerating pace of algorithmic innovation, we have overcome both of these challenges.
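To see why "days" is plausible for a billion-parameter model, here is a back-of-envelope sketch. The factor of roughly 6 FLOPs per parameter per training example (forward plus backward pass through a dense network) is a rough heuristic, and every number below is an illustrative assumption, not a figure from this post.

```python
def train_days(params, examples, epochs, device_flops):
    """Rough training-time estimate: ~6 FLOPs per parameter per example."""
    total_flops = 6 * params * examples * epochs
    seconds = total_flops / device_flops
    return seconds / 86400  # seconds per day

# 1e9 parameters, 10M examples, 50 epochs, hardware sustaining 10 TFLOP/s:
days = train_days(1e9, 10e6, 50, 10e12)
print(f"{days:.1f} days")  # ~3.5 days
```

Even with generous assumptions about sustained throughput, the estimate lands in the multi-day range, and re-running the process to try different topologies multiplies that cost accordingly.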
We now have the compute and data we need for neural networks. A variety of processing platforms exist for executing deep learning workloads at the speeds required for AI solutions, even as the datasets consumed by the models grow larger and larger.

Solutions for Every Business Need

With recent acquisitions, Intel now offers four platforms for AI solutions. People sometimes ask me why we would need four platforms for AI. The answer is that different AI use cases have different platform requirements.

Intel® Xeon® processors

With machine learning and deep learning solutions, most of the processing time involves data management, such as bringing data into the system and cleaning it up. The compute time is a smaller part of the problem. This mix of needs, heavy on management and less so on compute, is best served by the Intel Xeon processor platform, the world’s most widely deployed machine learning platform. Intel Xeon processors are optimized for a wide variety of data center workloads, enabling flexible data center infrastructure.

Intel® Xeon Phi™ processors

As you move into more demanding machine learning algorithms, where models are built, trained, and then retrained over and over, you need a different platform balance that enables a shorter time to train. Intel Xeon Phi processors are a great platform choice for these higher-performance, general-purpose machine learning solutions. They are optimized for HPC and scale-out, highly parallel, memory-intensive applications. With the integrated Intel® Omni-Path Fabric, these processors offer direct access to up to 400 GB of memory with no PCIe performance lag. They enable near-linear scaling efficiency, resulting in lower time to train.

Future generation processors

When you advance into the deep learning subset of machine learning, your workloads will have different requirements.
For the fastest performance, you need a platform that is optimized for training deep learning algorithms that involve massive amounts of data. Our upcoming Intel® Nervana™ platform (codename Lake Crest) has been developed specifically for this use case. This platform will deliver the first instance of the Nervana Engine coupled with the Intel Xeon processor. With its unprecedented compute density and high-bandwidth interconnect, this new platform will offer best-in-class neural network performance. We’re talking about an order of magnitude more raw computing power compared to today’s state-of-the-art GPUs.

Intel® Xeon® processors + FPGA

Once you have trained your models, you need a platform that can run inference very efficiently using those trained neural networks. For example, you might have an application that classifies images based on its ability to recognize things in them, such as different types of animals. The combination of Intel Xeon processors and FPGA (field-programmable gate array) accelerators is uniquely suited for these sorts of inference workloads. It’s a customizable and programmable platform that offers low latency and flexible precision with high performance per watt for machine learning inference.

Just the Beginning of AI

Here’s the bottom line: if the competitiveness of your organization depends on your ability to leverage a wide range of AI solutions, you need more than a one-size-fits-all processing platform. You need all four of these Intel platforms for an end-to-end solution.

Let’s close with a look to the future. While we have clearly made huge strides in advancing AI, we still have a long way to go. We need to ramp up the performance of machine learning algorithms to unprecedented levels. At Intel, we are firmly committed to this goal. Over the next three years, Intel aims to reduce the time required to train deep models by 100x compared to today’s GPU solutions.
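Claims like "near-linear scaling efficiency" and lower time to train can be made concrete with a simple Amdahl's-law sketch. The serial fraction and node counts below are illustrative assumptions for a generic scale-out cluster, not measured figures for any Intel platform.

```python
def time_to_train(t_single, nodes, serial_frac=0.01):
    """Amdahl's-law estimate: the serial part of the job does not speed up."""
    return t_single * (serial_frac + (1.0 - serial_frac) / nodes)

def scaling_efficiency(t_single, nodes, serial_frac=0.01):
    """Achieved speedup divided by ideal (linear) speedup."""
    speedup = t_single / time_to_train(t_single, nodes, serial_frac)
    return speedup / nodes

# Efficiency stays near 1.0 while the serial fraction is small
for n in (1, 8, 32):
    print(f"{n:3d} nodes: efficiency {scaling_efficiency(100.0, n):.2f}")
```

The sketch shows why keeping the non-parallel share of the work tiny (fast interconnects, no PCIe bottleneck) is what makes near-linear scaling, and therefore shorter time to train, achievable as node counts grow.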
This goal was spelled out in a recent news release in which Intel unveiled its AI strategy. In the meantime, while Intel pushes onward and upward, you can explore our Artificial Intelligence site for more information on the technologies outlined here.