High-Performance Computing in University Scientific Research
Santillán Salazar, Ruben Sandro
FIEE - Facultad de Ingeniería Electrónica, Eléctrica y de Telecomunicaciones, Universidad Nacional Mayor de San Marcos
The software programs that developers write to run on supercomputers are divided into many smaller, independent computational tasks, called "sub-processes," which can run simultaneously on these cores. For a supercomputer to work effectively, its cores must be designed to communicate data efficiently; a modern supercomputer may contain 100,000 or more cores. (For example, the United States' Titan, Fig. 2, at the time of writing the world's second-fastest supercomputer, contains just under 300,000 cores, which are reported to be capable of operating more than 6,000,000,000 threads simultaneously.)
Abstract-- This document presents the concepts needed for a proper understanding of high-performance computing. It is intended to explain the influence of this field on the environment of a professional career, as well as what it entails for student scientific research today.
I. INTRODUCTION
High-performance computing has become indispensable to businesses, scientific researchers, undergraduate and graduate programs at universities, and government agencies seeking to generate new discoveries and to innovate cutting-edge products and services. That is why, to take advantage of the capabilities of high-performance computing systems, it is essential to understand how they work and to adapt the design of programs to exploit their full potential. High-performance computing has transformed science, making enormous contributions in a variety of fields, including scientific research. In the following sections, this type of research will be examined from a student perspective.
II. DEFINITION
Fig. 1 Supercomputers.
High-performance computing (HPC) involves the use of "supercomputers" (Fig. 1) and massively parallel processing techniques to solve complex computational problems through computer modeling, simulation, and data analysis. This type of computing brings together several technologies, including computer architecture, software and electronics, and application software, under a single system to solve advanced problems quickly and effectively. While a common computer or workstation usually contains a single central processing unit (CPU), an HPC system is essentially a network of CPUs (e.g., microprocessors), each of which contains multiple computational cores, as well as its own local memory for running a wide range of software programs. [2]
Fig. 2 Titan supercomputer
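The decomposition of a program into independent sub-processes that run simultaneously on multiple cores can be sketched in miniature on an ordinary multicore workstation. The following is an illustrative example only (the work function and inputs are made-up placeholders, not a real HPC workload):

```python
# Minimal sketch: splitting a computation into independent
# sub-process tasks that run in parallel on the available CPU cores.
from multiprocessing import Pool, cpu_count

def simulate_chunk(n):
    """Stand-in for one independent computational task:
    here, a sum of squares over a range of n values."""
    return sum(i * i for i in range(n))

def run_parallel(chunks):
    # One worker process per core; each chunk becomes one sub-process task.
    with Pool(processes=cpu_count()) as pool:
        return pool.map(simulate_chunk, chunks)

if __name__ == "__main__":
    print(run_parallel([10, 100, 1000]))
```

On a real supercomputer the same idea is applied at far larger scale, typically with message-passing libraries such as MPI rather than a single machine's process pool.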
III. RELEVANCE
2. Development of process-scheduling algorithms aimed at asymmetric processors to optimize overall performance.
3. Analysis at different levels: operating system, compilers, programming techniques.
4. Cloud computing: basic software and the development of HPC applications (mainly Big Data).
5. Distributed intelligent real-time systems that take advantage of the computing power of the Cloud (Cloud Robotics). Fig. 5
6. Use of ABMS (Agent-Based Modeling and Simulation) [3] to develop an HPC input/output model [1] to predict how changes to the different components of the model affect the functionality and performance of the system.
7. Optimization of parallel algorithms to control the behavior of multiple robots working collaboratively, considering the distribution of their local processing capacity and their coordination with the computing power and storage capacity (data and knowledge) of a Cloud.
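The agent-based modeling and simulation (ABMS) idea behind line 6 can be illustrated with a toy model: compute-node "agents" emit I/O requests each step, and a shared file-system "agent" serves a fixed number per step, so changing a parameter shows its effect on queueing. All parameters below are invented for illustration and are not those of the model cited in [1]:

```python
# Toy ABMS sketch of an HPC I/O system: node agents generate requests,
# one file-system agent drains the queue at a fixed service rate.
import random

class ComputeNode:
    def __init__(self, request_prob):
        self.request_prob = request_prob  # chance of issuing I/O per step

    def step(self, rng):
        # Returns 1 if this node issues an I/O request this step.
        return 1 if rng.random() < self.request_prob else 0

def simulate(num_nodes, request_prob, service_rate, steps, seed=0):
    rng = random.Random(seed)
    nodes = [ComputeNode(request_prob) for _ in range(num_nodes)]
    queue = 0
    history = []
    for _ in range(steps):
        queue += sum(node.step(rng) for node in nodes)  # new requests arrive
        queue = max(0, queue - service_rate)            # requests served
        history.append(queue)
    return history

history = simulate(num_nodes=8, request_prob=0.5, service_rate=3, steps=100)
print("final I/O queue length:", history[-1])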
The ability to leverage high-performance computing has become indispensable not only for advanced manufacturing industries but also for a wide range of sectors. In fact, the use of high-performance computers has led to great advances in electronic design, content management and delivery, and the optimal development of power sources, among many others. In particular, high-performance computing enables breakthrough discoveries that fuel innovation and provides a cost-effective tool for accelerating the research and development process. It also enables advanced modeling, simulation, and data analysis that can help solve manufacturing challenges, aid in decision making, optimize processes and design, improve quality, predict performance and failures, and accelerate or even eliminate prototyping and testing. As part of its importance, the HPC architecture itself (Fig. 3) drives a continuous search for greater efficiency as technology advances. It is therefore necessary to investigate the different components of this architecture.
Fig. 3 HPC architecture.
Fig. 4 FPGA.
IV. RESEARCH AND DEVELOPMENT
Beyond commercial applications, high-performance computing has transformed the scientific world, driving research across numerous disciplines and making enormous contributions in a number of fields of study, including electronic engineering. The following section provides an overview of what this career involves. Among its lines of research and development in a university context, we can find:
1. Study and characterization of architectures: clusters, grids, clouds, accelerators (GPU, FPGA, Xeon Phi; Fig. 4), and hybrids.
Fig. 5 Cloud Robotics
V. EXPECTED RESULTS OF THIS RESEARCH
VII. REFERENCES
[1] A. De Giusti, F. Tinetti, M. Naiouf, F. Chichizola, L. De Giusti, H. Villagarcía, D. Montezanti, D. Encinas, A. Pousa, I. Rodriguez, L. Iglesias, J. M. Paniego, M. Pi Puig, M. Dell'Oso, and M. Mendez, "Arquitecturas Multiprocesador en Computación de Alto Desempeño: Software, Métricas, Modelos y Aplicaciones."
The aim is to study complex models that integrate real-time sensor networks and parallel computation. Catastrophe-prediction strategies (for floods and fires, for example) are based on these models, which require high processing capacity and real-time signal monitoring. [1] Work is under way on techniques for recovery from multiple system-level checkpoints, which serve to ensure the correct completion of scientific applications on HPC systems affected by external transient faults, and on integrating this solution with the fault-detection tools already developed. [3] Also expected is the development of applications linked to "Big Data," especially those solved in Cloud Computing, and the optimization of parallel algorithms to control the behavior of multiple robots working collaboratively, considering the distribution of their local processing capacity and their coordination with the computing power and storage capacity (data and knowledge) of a Cloud.
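The checkpoint-and-recovery technique mentioned above can be sketched at application level: periodically persist the computation state so that, after a transient fault, the run resumes from the last checkpoint instead of restarting. The file name, state layout, and checkpoint interval below are illustrative assumptions, not details of the cited work:

```python
# Minimal sketch of application-level checkpointing with atomic writes.
import os
import pickle
import tempfile

CHECKPOINT = os.path.join(tempfile.gettempdir(), "demo_checkpoint.pkl")

def save_checkpoint(state):
    # Write to a temp file, then rename, so the checkpoint
    # on disk is never left half-written by a crash mid-save.
    tmp = CHECKPOINT + ".tmp"
    with open(tmp, "wb") as f:
        pickle.dump(state, f)
    os.replace(tmp, CHECKPOINT)

def load_checkpoint():
    if os.path.exists(CHECKPOINT):
        with open(CHECKPOINT, "rb") as f:
            return pickle.load(f)
    return {"step": 0, "total": 0}  # no checkpoint: start fresh

def run(total_steps, checkpoint_every=10):
    state = load_checkpoint()
    for step in range(state["step"], total_steps):
        state["total"] += step          # stand-in for the real computation
        state["step"] = step + 1
        if state["step"] % checkpoint_every == 0:
            save_checkpoint(state)
    return state["total"]
```

Production HPC systems use coordinated multi-level checkpointing across nodes (e.g., local storage plus the parallel file system), but the resume-from-saved-state principle is the same.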
[2] S. J. Ezell and R. D. Atkinson, "The Vital Importance of High-Performance Computing to U.S. Competitiveness."
[3] D. Encinas et al., "Modeling I/O System in HPC: An ABMS Approach," The Seventh International Conference on Advances in System Simulation (SIMUL), ISBN: 978-1-61208-442-8, 2015.
Fig. 6 HPC/Big Data
VI. CONCLUSIONS
HPC is helping humanity to solve some of its most difficult problems, and the adoption and production of HPC systems will continue to be vital for the industrial development of countries. Continued investment is needed to ensure leadership in the production and deployment of state-of-the-art HPC systems. Proactive policies should also be enacted to ensure that current and future HPC systems can be used in a variety of ways so that their benefits are fully realized. HPC has been and will continue to be fundamental to economic and industrial competitiveness.