
Nvidia Fundamentals of Accelerated Computing with CUDA C/C++ (FACCC)
Training Objectives
This workshop teaches the fundamental tools and techniques for accelerating C/C++ applications to run on massively parallel GPUs with CUDA®. You’ll learn how to write code, configure code parallelization with CUDA, optimize memory migration between the CPU and GPU accelerator, and implement the workflow that you’ve learned on a new task—accelerating a fully functional, but CPU-only, particle simulator for observable massive performance gains. At the end of the workshop, you’ll have access to additional resources to create new GPU-accelerated applications on your own.
Please note that once a booking has been confirmed, it is non-refundable. This means that after you have confirmed your seat for an event, it cannot be cancelled and no refund will be issued, regardless of attendance.
Target Audience
This course is designed for developers, engineers, and researchers who want to accelerate their applications using GPU programming with CUDA C/C++. It is ideal for professionals working in high-performance computing (HPC), scientific simulations, deep learning, and data analytics. Participants should have a basic understanding of C/C++ programming and an interest in learning how to leverage GPUs for performance optimization.
Prerequisites
- Basic knowledge of C or C++ programming.
- Familiarity with parallel computing concepts is helpful but not required.
- No prior experience with CUDA programming is necessary.
Learning Methodology
The training offers a balanced mix of theory and hands-on practice in a first-class learning environment. Benefit from direct exchange with our trainers, who bring real project experience, and with other participants to maximize your learning success.
Course Content
Introduction
- Meet the instructor.
- Create an account at courses.nvidia.com/join
Accelerating Applications with CUDA C/C++
- Learn the essential syntax and concepts needed to write GPU-enabled C/C++ applications with CUDA:
- Write, compile, and run GPU code.
- Control parallel thread hierarchy.
- Allocate and free memory for the GPU.
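The three bullets above can be sketched in a minimal CUDA program. This is an illustrative example, not official course material: it writes a kernel, maps the thread hierarchy to array indices, and allocates and frees GPU memory with `cudaMalloc`/`cudaFree`.

```cuda
#include <cuda_runtime.h>

// Kernel: each thread doubles one element of the array.
__global__ void doubleElements(int *a, int n)
{
    // Build a globally unique index from the thread hierarchy.
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)            // Guard: the grid may have more threads than elements.
        a[i] *= 2;
}

int main()
{
    const int N = 1000;
    int *a;

    // Allocate device memory.
    cudaMalloc(&a, N * sizeof(int));
    cudaMemset(a, 0, N * sizeof(int));

    // Launch enough blocks of 256 threads to cover all N elements.
    int threadsPerBlock = 256;
    int numberOfBlocks  = (N + threadsPerBlock - 1) / threadsPerBlock;
    doubleElements<<<numberOfBlocks, threadsPerBlock>>>(a, N);

    // Kernel launches are asynchronous; wait for the GPU to finish.
    cudaDeviceSynchronize();

    cudaFree(a);
    return 0;
}
```

Saved as `double.cu`, this compiles with `nvcc -o double double.cu` and runs like any other executable.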
Managing Accelerated Application Memory with CUDA C/C++
- Learn to use the command-line profiler and CUDA-managed memory, focusing on observation-driven application improvements and a deep understanding of managed memory behavior:
- Profile CUDA code with the command-line profiler.
- Go deep on unified memory.
- Optimize unified memory management.
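As a rough illustration of the unified memory topics above (again a sketch, not course material), `cudaMallocManaged` returns a single pointer usable from both CPU and GPU, and `cudaMemPrefetchAsync` moves the pages ahead of time to avoid on-demand page faults:

```cuda
#include <cuda_runtime.h>

__global__ void addOne(int *a, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) a[i] += 1;
}

int main()
{
    const int N = 1 << 20;
    int *a;

    // Unified (managed) memory: one pointer, accessible from CPU and GPU.
    cudaMallocManaged(&a, N * sizeof(int));

    // First touch on the host: pages reside in CPU memory.
    for (int i = 0; i < N; ++i) a[i] = i;

    // Prefetch to the GPU so the kernel does not stall on page faults.
    int device;
    cudaGetDevice(&device);
    cudaMemPrefetchAsync(a, N * sizeof(int), device);

    addOne<<<(N + 255) / 256, 256>>>(a, N);
    cudaDeviceSynchronize();

    // Prefetch back before the CPU reads the results.
    cudaMemPrefetchAsync(a, N * sizeof(int), cudaCpuDeviceId);

    cudaFree(a);
    return 0;
}
```

Running such a program under the command-line profiler (e.g. `nsys profile ./app`) makes the effect of prefetching on migration traffic directly observable.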
Asynchronous Streaming and Visual Profiling for Accelerated Applications with CUDA C/C++
- Identify opportunities for improved memory management and instruction-level parallelism:
- Profile CUDA code with NVIDIA Nsight Systems.
- Use concurrent CUDA streams.
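The stream topic above can be sketched as follows (an illustrative example under the assumption of independent per-stream workloads): kernels launched into different non-default streams may execute concurrently, which NVIDIA Nsight Systems visualizes as overlapping timeline rows.

```cuda
#include <cuda_runtime.h>

__global__ void scale(float *x, int n, float f)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) x[i] *= f;
}

int main()
{
    const int N = 1 << 20;
    const int numStreams = 3;
    float *data[numStreams];
    cudaStream_t streams[numStreams];

    for (int s = 0; s < numStreams; ++s) {
        cudaMallocManaged(&data[s], N * sizeof(float));
        cudaStreamCreate(&streams[s]);
        // Fourth launch parameter selects the stream; independent
        // kernels in different streams can overlap on the GPU.
        scale<<<(N + 255) / 256, 256, 0, streams[s]>>>(data[s], N, 2.0f);
    }

    cudaDeviceSynchronize();

    for (int s = 0; s < numStreams; ++s) {
        cudaStreamDestroy(streams[s]);
        cudaFree(data[s]);
    }
    return 0;
}
```

Profiling this with `nsys profile` and opening the report in the Nsight Systems GUI shows whether the three kernels actually overlapped.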
Final Review
- Review key learnings and wrap up questions.
- Complete the assessment to earn a certificate.
- Take the workshop survey.
Notes
Partner
We offer this seminar in cooperation with our NVIDIA Learning Partner, Fast Lane Institute for Knowledge Transfer GmbH.
Open Badge for This Seminar - Your Digital Proof of Competence

By successfully completing a course at IT-Schulungen.com, you receive a digital Open Badge (certificate) in addition to your certificate of attendance - a modern credential for the skills you have acquired.
Your Open Badge is available at any time in your free personal Mein IT-Schulungen.com account. With just a few clicks, you can share this digital credential on social networks to make your expertise visible and strengthen your professional profile.
Overview: NVIDIA Training Portfolio
Guaranteed Course Dates
| Date | Format |
| --- | --- |
| 17.04.2026 | Virtual Classroom (online) |
| 21.05.2026 | Virtual Classroom (online) |
| 19.06.2026 | Virtual Classroom (online) |
| 31.07.2026 | Virtual Classroom (online) |
| 11.09.2026 | Virtual Classroom (online) |
| 16.10.2026 | Virtual Classroom (online) |
| 06.11.2026 | Virtual Classroom (online) |
| 04.12.2026 | Virtual Classroom (online) |



