18 September 2013

South Africa's Kelvin van der Linde, one of the two youngest drivers in the competition, will go into the final round of the international VW Scirocco R-Cup in Hockenheim in Germany on 19 October sitting in first place in the Championship standings. A victory in the Volkswagen-backed series would almost certainly open doors to an international racing career for the young South African.

Most recently, the 17-year-old was in action at Oschersleben in Germany over the weekend. He went into the event on the back of four wins and increased his lead in the standings by taking second place in Saturday's opening race behind Denmark's Kasper Jensen.

Commenting on his performances over the weekend, Van der Linde said: "At the beginning of Saturday's race, I tried to attack Kasper. But then I took a conservative approach to the whole thing and stopped taking risks. After all, what I really want is to win the championship, and it's looking very positive on that front."

Mid-race accident

However, a poor 15th place in race two on Sunday saw his lead reduced. Van der Linde began the race in seventh place on the grid and was aiming to put the Championship beyond doubt. A mid-race accident, when he was lying sixth and on course to wrap up that title, put paid to those hopes.

Instead, Van der Linde was handed a drive-through penalty after race officials deemed that he had pushed another competitor. Post-race viewing of the video footage showed that the other car had spun in front of Van der Linde and that the contact was unintentional.

'Everything went wrong'

"Absolutely everything went wrong in Sunday's race," he added. "I wanted to play it safe because of the Cup title. Then I came into contact with another car and got the drive-through penalty.

"All in all, I was too cautious and didn't take enough risks. I'll come back stronger in Hockenheim. I want to claim this title."

With one race remaining and a maximum of 60 points on offer, Van der Linde leads the standings with 290 points to Jensen's 247 and fellow South African Jordan Pepper's 232. What it means is that Van der Linde needs only to finish the season's last race in 12th place to win the world's only single-make championship powered by natural gas.

SAinfo reporter
The artificial intelligence (AI) taxonomy spans capabilities that enable systems to sense, reason, act and adapt. These capabilities stem from machine learning technologies, including deep learning and classic machine learning, as well as technologies for reasoning systems. In this post, I will focus on the hardware technologies for machine learning (ML), the area of AI that enables algorithms to learn from experience and improve their performance over time.

AI Taxonomy

The machine learning algorithms at the heart of many AI solutions bring a unique set of technical challenges. For starters, these algorithms have high arithmetic density. Training a model with a billion parameters (a moderately complex network) can take days unless properly optimized and scaled. Further, this process often needs to be repeated to experiment with different topologies to reach the desired level of inferencing accuracy. All of this requires a huge amount of computational power.

And then there is the data to think about. When you train a model, performance scales with the amount of data you feed into it. For example, the performance of a speech recognition algorithm might improve greatly, to near human-level performance, if it is fed enough data (up to a point). Of course, this too requires a significant amount of memory and computing capacity.

Until recently, these challenges were a major roadblock for neural networks. While the mathematical concepts behind neural networks have been around for decades, until now we have lacked the combination of technologies required to accelerate the adoption of deep learning. That combination has two key components: enough compute to build sufficiently expressive networks, and enough data to train generalizable networks. Today, thanks to Moore's Law, the rapid digitization of content around us, and the accelerating pace of algorithmic innovation, we have overcome both of these challenges.
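To make the compute challenge concrete, here is a back-of-envelope sketch of why a billion-parameter model can take days to train. All the numbers below are illustrative assumptions (not figures from this post): it uses the common rule of thumb of roughly 6 floating-point operations per parameter per training example seen, plus an assumed hardware throughput and utilization.

```python
# Rough days-to-train estimate. The "6 * params * examples" FLOPs rule
# of thumb, the 30% utilization figure, and the hardware numbers below
# are all illustrative assumptions, not vendor data.

def training_days(params, examples, peak_flops_per_sec, utilization=0.3):
    """Estimate days to train under the 6*N*D FLOPs rule of thumb."""
    total_flops = 6 * params * examples
    effective = peak_flops_per_sec * utilization  # hardware rarely hits peak
    return total_flops / effective / 86_400       # 86,400 seconds per day

# A 1-billion-parameter model seeing 2 billion training examples on
# hardware sustaining 100 TFLOP/s peak at 30% utilization:
days = training_days(1e9, 2e9, 100e12)
print(f"{days:.1f} days")
```

Under these made-up but plausible numbers the estimate lands in the multi-day range, which is why the repeated retraining described above multiplies quickly into a serious compute bill.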
We now have the compute and data we need for neural networks. A variety of processing platforms exist for executing deep learning workloads at the speeds required for AI solutions, even as the datasets consumed by the models grow larger and larger.

Solutions for Every Business Need

With recent acquisitions, Intel now offers four platforms for AI solutions. People sometimes ask me why we would need four platforms for AI. The answer is that different AI use cases have different platform requirements:

Intel® Xeon® processors

With machine learning and deep learning solutions, most of the processing time involves data management, such as bringing data into the system and cleaning it up. The compute time is a smaller part of the problem. This mix of needs, heavy on management and less so on compute, is best handled by the Intel Xeon processor platform, the world's most widely deployed machine learning platform. Intel Xeon processors are optimized for a wide variety of data center workloads, enabling flexible data center infrastructure.

Intel® Xeon Phi™ processors

As you move into more demanding machine learning algorithms, where models are built, trained and then retrained over and over, you need a different platform balance that enables a shorter time to train. Intel Xeon Phi processors are a great platform choice for these higher-performance, general-purpose machine learning solutions. They are optimized for HPC and scale-out, highly parallel, memory-intensive applications. With the integrated Intel® Omni-Path Fabric, these processors offer direct access to up to 400 GB of memory with no PCIe performance lag. They enable near-linear scaling efficiency, resulting in lower time to train.

Future generation processors

When you advance into the deep learning subset of machine learning, your workloads will have different requirements.
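The "near-linear scaling efficiency" claim above can be sketched with simple arithmetic: at 100% efficiency, doubling the node count halves the time to train, and real clusters fall somewhat short of that. The node counts and the 90% efficiency figure below are illustrative assumptions, not measured results.

```python
# Illustrative sketch (not vendor data): how scaling efficiency turns
# node count into time to train.

def time_to_train(single_node_hours, nodes, efficiency=0.9):
    """Hours to train on `nodes` machines at a given scaling efficiency."""
    speedup = nodes * efficiency  # perfect scaling would be speedup == nodes
    return single_node_hours / speedup

# A job taking 96 hours on one node, spread across 16 nodes at an
# assumed 90% scaling efficiency: 96 / (16 * 0.9) ≈ 6.7 hours.
print(f"{time_to_train(96, 16):.1f} hours")
```

The gap between the ideal 6.0 hours (96/16) and the 6.7 hours here is exactly what "near-linear" scaling efficiency is measuring.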
For the fastest performance, you need a platform that is optimized for training deep learning algorithms that involve massive amounts of data. Our upcoming Intel® Nervana™ platform (codename Lake Crest) has been developed specifically for this use case. This platform will deliver the first instance of the Nervana Engine coupled with the Intel Xeon processor. With its unprecedented compute density and high-bandwidth interconnect, this new platform will offer best-in-class neural network performance. We're talking about an order of magnitude more raw computing power compared to today's state-of-the-art GPUs.

Intel® Xeon® processors + FPGA

Once you have trained your models, you need a platform that can run inference very efficiently using these trained neural networks. For example, you might have an application that classifies images based on its ability to recognize things in them, such as different types of animals. The combination of Intel Xeon processors and FPGA (field-programmable gate array) accelerators is uniquely suited to these inference workloads. It's a customizable, programmable platform that offers low latency and flexible precision with high performance per watt for machine learning inference.

Just the Beginning of AI

Here's the bottom line: if the competitiveness of your organization depends on your ability to leverage a wide range of AI solutions, you need more than a one-size-fits-all processing platform. You need all four of these Intel platforms for an end-to-end solution.

Let's close with a look to the future. While we have clearly made huge strides in advancing AI, we still have a long way to go. We need to ramp up the performance of machine learning algorithms to unprecedented levels. At Intel, we are firmly committed to this goal. Over the next three years, Intel aims to reduce the time required to train deep models by 100x in comparison to today's GPU solutions.
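To show why inference is so much cheaper than training, here is a minimal sketch of what an inference workload actually does: a single forward pass through an already-trained network, with no gradients and no weight updates. The tiny network and its weights below are made-up placeholders, not a real trained model.

```python
import math

# Toy "trained" weights for a 2-input, 2-hidden-unit, 1-output classifier.
# These values are illustrative placeholders only.
W1 = [[0.5, -0.2], [0.1, 0.8]]   # hidden-layer weights (2x2)
b1 = [0.0, 0.1]                  # hidden-layer biases
W2 = [1.0, -1.0]                 # output-layer weights
b2 = 0.0                         # output-layer bias

def relu(x):
    return max(0.0, x)

def forward(x):
    """Inference is just this: matrix math forward through fixed weights."""
    hidden = [relu(sum(w * xi for w, xi in zip(row, x)) + b)
              for row, b in zip(W1, b1)]
    logit = sum(w * h for w, h in zip(W2, hidden)) + b2
    return 1.0 / (1.0 + math.exp(-logit))  # sigmoid -> class probability

print(f"{forward([1.0, 2.0]):.3f}")
```

Each prediction costs a fixed, small number of multiply-adds, which is why latency and performance per watt, rather than raw training throughput, are the metrics that matter for an inference platform.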
This goal was spelled out in a recent news release in which Intel unveiled its AI strategy. In the meantime, while Intel pushes onward and upward, you can explore our Artificial Intelligence site for more information on the technologies outlined here.
Panasonic has just launched a range of camcorders in the Indian market. The devices, part of its 4K Ultra HD lineup, are the HC-WX970 and the HC-VX870. The camcorders were first announced at CES in January. The HC-WX970 is priced at Rs 84,990, while the HC-VX870 is priced at Rs 74,990.

One of the few differences between the two devices is the integrated camera attached to the flip screen of the HC-WX970, which enables picture-in-picture recording of footage. Although both variants feature Twin Camera functions, on the HC-VX870 the feature can only be used by connecting a smartphone over Wi-Fi. The devices also include a rotatable sub camera, which can be used to take two different shots at different angles.

The cameras run on a Crystal Engine 4K processor with a hybrid 5-axis OIS system, and their microphones are capable of recording 5.1-channel surround sound. Both devices sport 3-inch LCD screens and are powered by a 3.6V battery. The cameras support 4K and high-speed HD recording. Both support AVCHD and come with an MOS image sensor offering more pixels for HD images, high-speed readout and reduced rolling-shutter distortion. Other features include a windshield zoom microphone, night mode, Wi-Fi with NFC and narration mode, to name a few.