Research

Hardware-Based Energy Optimizations

Rajesh Gupta leads research on architectural alternatives for supporting coprocessors in a general execution environment. He focuses specifically on designs that place coprocessors directly on the CPU-memory bus, so that the coprocessor, in this case an array of FPGAs, operates in a memory-coherent manner with the main processor. This approach, called "coherent coprocessing" (CCP), allows very fine-grained application acceleration. It is currently being applied to a compute-intensive challenge in bioinformatics, computational mass spectrometry: the MS-Alignment program is being modified to optimize its performance within an FPGA-based coherent coprocessing environment.
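
To see why bus-level coherence matters for fine-grained acceleration, consider a rough latency model: a copy-based accelerator pays a per-invocation data-transfer cost that can swamp a small kernel, while a coherent coprocessor reads its operands in place. The sketch below illustrates this trade-off; all latency and speedup figures are invented for the example, not measurements of the CCP design.

# An illustrative latency model of why bus-level coherence enables
# fine-grained offload: a copy-based accelerator pays a per-call transfer
# cost, while a coherent coprocessor reads operands in place. All figures
# here are invented assumptions, not measurements of the CCP design.

def offload_worthwhile(work_us, transfer_us, speedup):
    """True if accelerated time (including transfer) beats the CPU time."""
    return transfer_us + work_us / speedup < work_us

fine_grained_work_us = 20.0   # a small kernel, e.g. one alignment step
for name, transfer_us in [("copy-based (DMA over I/O bus)", 50.0),
                          ("coherent (shared CPU-memory bus)", 2.0)]:
    ok = offload_worthwhile(fine_grained_work_us, transfer_us, speedup=10)
    print(f"{name}: offload pays off -> {ok}")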

Raj Singh and Dan Sandin investigate the use of general-purpose graphics processing units (GPGPUs), found today in increasing numbers in off-the-shelf high-end graphics hardware, to support compute-intensive processing and advanced visualization. They apply GPGPU architectures and custom coding techniques to maximize performance gains in the computation and display of quaternion, i.e., four-dimensional, Julia sets rendered in 3D graphics space using a per-pixel iterative computational technique.
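
The core of that per-pixel work is an escape-time iteration of the quaternion map q <- q^2 + c, which a GPU evaluates independently for every pixel. The sketch below shows the same iteration vectorized with NumPy on a 2D slice of the 4D space; the constant c, the resolution, and the choice of slice are illustrative assumptions, not the researchers' actual parameters.

# Escape-time iteration for a quaternion Julia set, q <- q^2 + c. On a GPU
# each pixel's iteration runs as an independent thread; here NumPy
# vectorizes the same computation on the CPU for illustration.
import numpy as np

def quaternion_julia_slice(res=256, c=(-0.2, 0.6, 0.2, 0.2), max_iter=64):
    """Escape-time counts for a 2D (w, x) slice of a quaternion Julia set."""
    w, x = np.meshgrid(np.linspace(-1.5, 1.5, res),
                       np.linspace(-1.5, 1.5, res))
    y = np.zeros_like(w)
    z = np.zeros_like(w)
    cw, cx, cy, cz = c
    escape = np.full(w.shape, max_iter, dtype=np.int32)
    alive = np.ones(w.shape, dtype=bool)   # points not yet escaped
    for i in range(max_iter):
        # Quaternion square plus constant: q <- q^2 + c, where
        # q^2 = (w^2 - x^2 - y^2 - z^2, 2wx, 2wy, 2wz).
        w2 = w*w - x*x - y*y - z*z + cw
        x2 = 2*w*x + cx
        y2 = 2*w*y + cy
        z2 = 2*w*z + cz
        w, x, y, z = w2, x2, y2, z2
        escaped = alive & (w*w + x*x + y*y + z*z > 4.0)
        escape[escaped] = i
        alive &= ~escaped
        # Reset escaped points so they cannot overflow on later iterations.
        w[~alive] = 0; x[~alive] = 0; y[~alive] = 0; z[~alive] = 0
    return escape

counts = quaternion_julia_slice()
print(counts.min(), counts.max())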

Greg Hidley and a team of "DC Power Partners" investigate the energy efficiencies achievable through DC power distribution in a server facility. In a traditional server facility, AC power is delivered at high voltage and converted to DC in the UPS system to charge batteries and condition the power. From there it is converted back to AC to drive the power supplies of the computing equipment, which convert it to DC once more to run CPUs, memory, disks, and communications components. Skipping or consolidating these conversion steps can considerably reduce overall electricity usage, both in the power distribution chain and in cooling. This is demonstrated in a two-rack, DC-powered system that drives a dozen servers through a DC power distribution chain.
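
A back-of-the-envelope calculation shows where the savings come from: end-to-end efficiency is the product of per-stage conversion efficiencies, so every stage removed compounds. The per-stage figures and the 10 kW load below are assumptions chosen for illustration, not measurements from the GreenLight facility.

# Illustrative comparison of the two distribution chains. The per-stage
# efficiency figures are assumptions for the example, not measured values.

def chain_efficiency(stages):
    """End-to-end efficiency is the product of per-stage efficiencies."""
    eff = 1.0
    for e in stages:
        eff *= e
    return eff

# Traditional chain: AC -> DC (UPS/battery) -> AC (inverter) -> DC (server PSU)
ac_chain = chain_efficiency([0.94, 0.94, 0.90])
# Consolidated chain: one AC -> DC rectification, then DC straight to servers
dc_chain = chain_efficiency([0.94, 0.95])

load_kw = 10.0  # assumed IT load for a dozen servers
print(f"AC chain draw: {load_kw / ac_chain:.2f} kW")
print(f"DC chain draw: {load_kw / dc_chain:.2f} kW")
print(f"Savings: {load_kw / ac_chain - load_kw / dc_chain:.2f} kW "
      "(before counting the reduced cooling load)")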


Software-Based Optimizations

Amin Vahdat investigates networking requirements for large-scale data center architectures, as well as virtualization techniques to increase the energy efficiency available in emerging data center designs. Virtualization services have improved dramatically over the past decade and have become pervasive within the service-delivery industry. Virtual machines are particularly attractive for server consolidation. However, while physical CPUs are frequently amenable to multiplexing, main memory is not; memory is thus often the primary bottleneck to increasing the degree of multiplexing in enterprise and data center settings. Vahdat's Difference Engine research enables virtual machine (VM) monitors to allocate more machine memory to VMs than is present in the system by using aggressive memory-sharing techniques. His research in multi-stage network switching addresses the need to balance virtual machine processing against networked communications requirements.
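
One family of memory-sharing techniques identifies guest pages with identical contents and stores them once, reference-counted. The sketch below illustrates that idea with a content-hash pool; it covers only exact duplicates, whereas Difference Engine goes further, also patching nearly identical pages and compressing cold ones, and the class and variable names here are hypothetical.

# A simplified sketch of content-based page sharing: pages with identical
# contents are stored once and reference-counted.
import hashlib

class SharedPagePool:
    def __init__(self):
        self.pages = {}   # content hash -> (page bytes, reference count)

    def insert(self, page: bytes):
        """Return a key for the page, sharing storage with identical pages."""
        key = hashlib.sha256(page).hexdigest()
        data, refs = self.pages.get(key, (page, 0))
        self.pages[key] = (data, refs + 1)
        return key

    def machine_pages_used(self):
        return len(self.pages)

pool = SharedPagePool()
# Two VMs whose guests hold many identical zero pages plus one unique page.
vm_a = [bytes(4096)] * 100 + [bytes([1]) * 4096]
vm_b = [bytes(4096)] * 100 + [bytes([2]) * 4096]
for page in vm_a + vm_b:
    pool.insert(page)
print(f"{len(vm_a) + len(vm_b)} guest pages -> "
      f"{pool.machine_pages_used()} machine pages")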

Ingolf Krueger is exploring the applicability of domain-modeling techniques to manage the complexity of an integrated architecture for the GreenLight instrument. Modeling plays an important role in all requirements-engineering activities, serving as a common interface to domain analysis, requirements elicitation, specification, assessment, documentation, and evolution. Models can help in defining questions for stakeholders and in surfacing hidden requirements. Ultimately, the requirements have to be mapped to the precise specification of the system, and the mapping should be kept up to date as the requirements or the architecture evolve. One product of this activity is the GLIMPSE (GreenLight Infrastructure Management and Performance Evaluation) tool, which provides access to GreenLight environmental information in a variety of visual formats.


Visualization
Falko Kuester, Jurgen Schulze, and Tom DeFanti applied the resources of their Immersive Visualization Laboratory (IVL) to develop a virtual version of the Sun Modular Datacenter and the GreenLight instrument. The purpose of this project is to visualize sensor data, such as power consumption and temperature, in an easily accessible way within the high-end visualization environments at Calit2, like the StarCAVE or the AESOP wall. The researchers have created a 3D model of the container, using a CAD model from Sun as a basis. The model is a complete replica of the container and even allows the user to open the doors, enter, and pull out the computer racks, all by directly interacting with the visual components.

The user of this 3D model has access to the data from the sensors in the container, which is used to visually display the state of the systems within. The interactive 3D application thus makes it possible to view the state of the machines in the container from a remote location, without physically visiting it and potentially disrupting the measurements (for instance by opening the doors and letting the cool air escape the container).
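
One simple way such readings can drive the visuals is to map each sensor value onto a color for the corresponding object in the 3D scene. The sketch below normalizes a temperature reading into a blue-to-red ramp; the thresholds and rack names are illustrative assumptions, not values from the actual application.

# Map a sensor reading onto a display color: temperatures are normalized
# into [0, 1] and interpolated between blue (cool) and red (hot).

def temp_to_rgb(temp_c, cool=18.0, hot=35.0):
    """Linear blue-to-red ramp; returns an (r, g, b) tuple in [0, 1]."""
    t = (temp_c - cool) / (hot - cool)
    t = max(0.0, min(1.0, t))   # clamp out-of-range readings
    return (t, 0.0, 1.0 - t)

readings = {"rack-1": 21.5, "rack-2": 29.0, "rack-3": 36.2}
for rack, temp in readings.items():
    print(rack, temp_to_rgb(temp))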


Energy Monitoring and Modeling

Tajana Simunic Rosing leads research efforts to analyze power, thermal, and workload dynamics, and to reduce power consumption while mitigating temperature-induced problems in data center environments. Her group has implemented and tested a number of reactive and proactive thermal-management techniques and has evaluated the benefits of optimizing temperature while minimizing energy, versus focusing only on lowering system power/energy consumption. She is also extending Xen, a virtual machine monitor for IA-32, IA-64, and PowerPC 970 architectures, to enable power and thermal management within its virtualization technology.
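
The essence of a proactive technique is to predict near-future temperature from recent history and shift load away from a core before it crosses the threshold, rather than reacting afterward. The sketch below uses an exponentially smoothed trend as the predictor; the model and every number in it are illustrative assumptions, not the group's actual technique.

# Proactive thermal management in miniature: predict the next temperature
# sample from recent history and migrate load *before* the threshold is hit.

THRESHOLD_C = 75.0
ALPHA = 0.5  # weight on the most recent trend (assumed)

def predict_next(history):
    """Extrapolate the next sample from an exponentially smoothed trend."""
    trend = 0.0
    for prev, cur in zip(history, history[1:]):
        trend = ALPHA * (cur - prev) + (1 - ALPHA) * trend
    return history[-1] + trend

core_temps = {
    "core0": [60.0, 63.0, 67.0, 72.0],   # heating quickly
    "core1": [55.0, 55.5, 56.0, 56.2],   # stable
}
for core, history in core_temps.items():
    predicted = predict_next(history)
    action = "migrate load" if predicted > THRESHOLD_C else "no action"
    print(f"{core}: predicted {predicted:.1f} C -> {action}")
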
Claudiu Farcas has led efforts to develop an architecture for monitoring GreenLight energy- and heat-related information to help correlate the energy expenditures of the project's experiments. For this purpose, his team identified sensor classes related to the monitoring instrumentation available in the GreenLight instrument. These classes group sensors by their interfacing capabilities, namely SCADA, SNMP/AVOCENT, IPMI, and WS/XML. Orthogonally, sensors are grouped into classes by their measurement capabilities, such as temperature, fan speed, humidity, water flow, and power (volts, amps, watts). The intersection of these two sets of classes identifies the correct mechanism to retrieve data from each sensor and establishes the semantics of its readings, which are crucial for correlating measurements from all around the GreenLight instrument into a coherent dataset. All measurements are archived in a relational database for later analysis.

Using these measurement capabilities, the team has been working on means of tailoring and providing the collected data to the project's various stakeholders. To this end, they devised a set of web-based interfaces that provide live and historical analysis capabilities, grouped under the GreenLight Infrastructure Management & Performance Evaluation (GLIMPSE) portal available at http://glimpse.calit2.net
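
The two orthogonal classifications can be expressed directly in code: each sensor carries an interface class that selects a retrieval mechanism and a measurement class that fixes the semantics of its readings. In the sketch below, the sensor names and retrieval commands are hypothetical placeholders, not the actual GreenLight instrumentation.

# Two-axis sensor classification: the interface class says how to fetch a
# reading, the measurement class says what the reading means.
from dataclasses import dataclass

@dataclass
class Sensor:
    name: str
    interface: str     # e.g. "SCADA", "SNMP", "IPMI", "WS/XML"
    measurement: str   # e.g. "temperature", "power", "water_flow"

RETRIEVERS = {
    "SNMP":   lambda s: f"snmpget {s.name}",              # placeholder commands
    "IPMI":   lambda s: f"ipmitool sensor get {s.name}",
    "SCADA":  lambda s: f"scada-poll {s.name}",
    "WS/XML": lambda s: f"GET /sensors/{s.name}.xml",
}

sensors = [
    Sensor("rack3-inlet", "IPMI", "temperature"),
    Sensor("pdu-7", "SNMP", "power"),
    Sensor("chiller-1", "SCADA", "water_flow"),
]

# The intersection of the two classes yields both the fetch mechanism and
# the interpretation needed to correlate readings into one dataset.
for s in sensors:
    print(s.measurement, "<-", RETRIEVERS[s.interface](s))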

Server Facilities Optimized for Renewable Energy Usage
Greg Hidley and a team of Canadian GreenStar researchers investigate tools to maximize the use of renewable energy for server facilities. One area of potential carbon-footprint reduction in Information and Communication Technology (ICT) is the use of renewable energy sources. Project GreenLight and Project GreenStar are collaborating to develop server environments capable of dynamically selecting server resources based on their current usage of renewable energy (something that can change rapidly as the sun sets, the wind stops blowing, or other factors affect the availability of renewable energy).
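
A minimal sketch of such dynamic selection: given each site's current renewable fraction and remaining capacity, dispatch new work to the greenest site that can take it. The site names and figures below are invented for illustration.

# Dynamic server selection by renewable supply: steer new work to the
# greenest site that still has capacity. All data here is made up.

sites = [
    {"name": "solar-farm-dc", "renewable_fraction": 0.15, "free_slots": 40},  # sun has set
    {"name": "wind-farm-dc",  "renewable_fraction": 0.90, "free_slots": 12},
    {"name": "hydro-dc",      "renewable_fraction": 0.75, "free_slots": 0},   # full
]

def pick_site(sites):
    """Choose the site with the highest renewable fraction and free capacity."""
    candidates = [s for s in sites if s["free_slots"] > 0]
    return max(candidates, key=lambda s: s["renewable_fraction"]) if candidates else None

site = pick_site(sites)
print("dispatching job to", site["name"])   # -> wind-farm-dc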

Application Successes

Tom DeFanti and Larry Smarr lead GreenLight activities related to multimedia distribution.
CineGrid [http://www.cinegrid.org] is a research community with the mission "To build an interdisciplinary community that is focused on the research, development, and demonstration of networked collaborative tools to enable the production, use, preservation and exchange of very-high quality digital media over photonic networks." Members of CineGrid are a mix of post-production facilities, media arts schools, research universities, scientific laboratories, and hardware/software developers around the world, connected by one or more networks running at up to 10 Gbps. Of particular interest to GreenLight is the energy efficiency of next-generation, high-definition mixed-media development and collaboration environments. Our instrumented servers, network switches, display clusters, and display environments offer ways to measure the storage, transmission, duplication, rendering, and display of mixed-media data at very high resolutions and frame rates. In collaboration with the CineGrid project, we are measuring and optimizing these techniques with the goal of maximizing work per watt.
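
Work per watt itself is straightforward to compute from instrumented readings: divide the work accomplished by the energy consumed while doing it. The sketch below compares two hypothetical configurations on a frames-per-watt-hour basis; all readings are invented, not measurements from the instrument.

# "Work per watt" from instrumented readings: units of work divided by the
# energy consumed. The example readings are invented.

def work_per_watt_hour(units_of_work, avg_power_w, elapsed_s):
    """Units of work per watt-hour of energy consumed."""
    energy_wh = avg_power_w * elapsed_s / 3600.0
    return units_of_work / energy_wh

# e.g. rendering 10,000 high-resolution frames on two candidate setups
baseline = work_per_watt_hour(units_of_work=10_000, avg_power_w=850, elapsed_s=1200)
optimized = work_per_watt_hour(units_of_work=10_000, avg_power_w=610, elapsed_s=1350)
print(f"baseline:  {baseline:.1f} frames/Wh")
print(f"optimized: {optimized:.1f} frames/Wh")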

Metagenomics
Trey Ideker leads activities to improve metagenomic workflows between multiple laboratories. His team implements lab workflows using BIOGEM, which accesses data across a collection of GreenLight high-performance storage resources. Data analyses performed on lab data include image analysis, base calling, and sequence analysis. Common tools include GAPipeline/CASAVA (developed by Illumina), R, BioConductor (an R package), and RNA-Seq (an R package). Lab goals include providing users with training and documentation on these tools to help them make sense of the provided genomic data. The team is working to improve the efficiency of these storage devices using experience gained in ongoing storage and data-transfer research, including protocol optimizations and various caching and storage optimizations.
Ingolf Krueger's group has been working on models and an implementation for a proteomics research platform that enables bioinformatics scientists to define specialized mass-spectrometry analysis workflows and execute them on the GreenLight instrument. These models help infer relationships between particular proteomics tools, their resource utilization, and the energy consumed by their execution. The results drive an upcoming design for a computational mass spectrometry cyberinfrastructure by optimizing computing architectures in relation to the proteomics experiments intended to run on them (e.g., general-purpose CPUs vs. GPGPUs with specialized adaptations of the proteomics algorithms), the communication bandwidth required to transport data between processing nodes, and centralized or distributed storage infrastructure (e.g., exploiting the data-locality principles that apply to data-intensive algorithms). His group is also incorporating FPGA-augmented architectures into the infrastructure, working to make them seamlessly available to proteomics experiments, and is investigating a port of core proteomics algorithms to the GPGPU architecture using the NVIDIA CUDA framework on top of dual GTX 295 cards.
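
These models support comparisons of the kind sketched below: estimate a workflow's energy as runtime times average node power, summed over stages, for each candidate architecture. All figures are invented for the example, not measurements from the GreenLight instrument.

# Compare candidate architectures by estimated workflow energy:
# sum of (stage runtime x average node power), in watt-hours.

def workflow_energy_wh(stages):
    """Sum runtime (s) x power (W) over stages, converted to watt-hours."""
    return sum(runtime_s * power_w for runtime_s, power_w in stages) / 3600.0

# (runtime seconds, average node power in watts) per workflow stage
cpu_plan = [(5400, 250), (1800, 230)]   # both stages on general-purpose CPUs
gpu_plan = [(900, 420), (1800, 230)]    # first stage ported to a GPGPU node

print(f"CPU plan:   {workflow_energy_wh(cpu_plan):.0f} Wh")
print(f"GPGPU plan: {workflow_energy_wh(gpu_plan):.0f} Wh")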


King Abdullah University of Science and Technology (KAUST), Saudi Arabia, is a new international, graduate-level research university dedicated to inspiring a new age of scientific achievement in the Kingdom that will also benefit the region and the world. KAUST is the realization of a decades-long vision of the Custodian of the Two Holy Mosques, King Abdullah Bin Abdulaziz Al Saud.
KAUST contracted with the Calit2 CyberInfrastructure and Advanced Visualization teams led by Tom DeFanti at UCSD to design and deploy advanced IT and Visualization Showcase resources for their university, using best practices gleaned from similar UCSD projects, including GreenLight. As such, many GreenLight experiences have contributed to the design of the initial prototypes at KAUST and have resulted in collaboration on a variety of energy-saving technologies.

In Year 3, Project GreenLight extended its collaborations to the HPWREN network in an effort to use GreenLight resources and lessons learned to improve the performance of, and accessibility to, regional observation data in support of emergency-response activities. The challenge unique to Project GreenLight is to provide these improvements while at the same time achieving a more energy-efficient infrastructure (e.g., by decreasing watts per transaction).