Open Access
Full-Process Computer Model of Magnetron Sputter, Part I: Test Existing State-of-Art Components
Author(s) -
Chris Walton,
George H. Gilmer,
Aaron P. Wemhoff,
Luis A. Zepeda-Ruiz
Publication year - 2007
Language(s) - English
Resource type - Reports
DOI - 10.2172/922114
Subject(s) - sputter deposition, sputtering, cavity magnetron, plasma, magnetic field, thin film, computational physics, materials science, mechanical engineering, computer science, nanotechnology, engineering, physics
This work is part of a larger project to develop a modeling capability for magnetron sputter deposition. The process is divided into four steps: plasma transport, target sputtering, transport of the neutral gas and sputtered atoms, and film growth, shown schematically in Fig. 1. Each of these is simulated separately in Part 1 of the project, which is jointly funded between CMLS and Engineering. The Engineering portion is the plasma modeling in step 1. The plasma modeling was performed using the Object-Oriented Particle-In-Cell code (OOPIC) from UC Berkeley [1]. Figure 2 shows the electron density in the simulated region, using magnetic field strength input from the experiments of Bohlmark [2], with a 1% scale applied. Figures 3 and 4 depict the magnetic field components, which were generated by two-dimensional linear interpolation of Bohlmark's experimental data. The goal of the overall modeling tool is to understand, and later predict, the relationships between deposition parameters we can control (such as gas pressure, gun voltage, and target-substrate distance) and key properties of the resulting films (such as stress, density, and stoichiometry). The project must use existing codes, either open-source or low-cost, rather than develop new ones. In Part 1 (FY07) we identified and tested the best available code for each process step, then determined whether it can cover the required size and time scales in reasonable computation times. We also had to determine whether the process steps are sufficiently decoupled to be treated separately, and to identify any research-level issues preventing practical use of these codes. Part 2 will consider whether the codes can be (or need to be) made to talk to each other and integrated into a whole.
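The report itself does not include code, but the two-dimensional linear interpolation step mentioned above can be illustrated with a short sketch. The example below is not from the report: the grid sizes, the placeholder field values, and the use of SciPy's RegularGridInterpolator are assumptions chosen purely for illustration. It resamples hypothetical measured (r, z) field components onto a finer simulation grid and applies a 1% scale factor, as a stand-in for how experimental field maps such as Bohlmark's might be mapped onto a PIC mesh.

```python
import numpy as np
from scipy.interpolate import RegularGridInterpolator

# Hypothetical measured magnetic field components on a coarse (r, z) grid,
# standing in for the experimental data (values are placeholders, in tesla).
r_meas = np.linspace(0.0, 0.05, 6)       # radial positions [m]
z_meas = np.linspace(0.0, 0.10, 11)      # axial positions [m]
Br_meas = np.random.default_rng(0).normal(0.00, 0.02, (6, 11))
Bz_meas = np.random.default_rng(1).normal(0.05, 0.02, (6, 11))

# Bilinear (two-dimensional linear) interpolators for each field component.
Br_interp = RegularGridInterpolator((r_meas, z_meas), Br_meas, method="linear")
Bz_interp = RegularGridInterpolator((r_meas, z_meas), Bz_meas, method="linear")

# Evaluate the field on the finer grid a PIC simulation would use,
# with an optional overall scale factor (e.g. 1% of the measured strength).
r_sim = np.linspace(0.0, 0.05, 101)
z_sim = np.linspace(0.0, 0.10, 201)
R, Z = np.meshgrid(r_sim, z_sim, indexing="ij")
pts = np.stack([R.ravel(), Z.ravel()], axis=-1)

scale = 0.01
Br_sim = scale * Br_interp(pts).reshape(R.shape)
Bz_sim = scale * Bz_interp(pts).reshape(R.shape)

print(Br_sim.shape, Bz_sim.shape)  # (101, 201) each
```

Note that componentwise linear interpolation keeps the field continuous but does not by itself guarantee a divergence-free result; a production workflow might add a consistency check or correction after resampling.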
