A New Test Paradigm for Semiconductor Memories in the Nano-Era
Abstract
Due to rapid and continuous technology scaling, faults in semiconductor memories (and in ICs in general) are becoming both pervasive and weak in nature. Weak faults are faults that pass the test program because they do not lead to erroneous behavior during testing; nevertheless, they may cause a system failure during application. As a result, the number of test escapes increases, while it becomes increasingly difficult to determine the nature of the failures. Components with weak faults that fail at board and system level are sent back to their suppliers, only to be returned as No Trouble Found (NTF). The conventional memory test approach assumes the presence of a single defect at a time causing a strong fault (i.e., a fault that leads to an error in the system), and is therefore unable to deal with weak faults. This thesis presents a new memory test approach able to detect weak faults; it is based on assuming the presence of multiple weak faults at a time in a memory system, rather than a single strong fault at a time. Detecting weak faults reduces the number of escapes, and hence also the number of NTFs. An experimental analysis using SPICE simulation of a case study shows, for example, that when assuming two simultaneous weak faults, the missing (defect) coverage can be reduced by up to 10% compared with the conventional approach.
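The escape mechanism described above can be illustrated with a minimal sketch. The following is an assumption-laden toy model (not the thesis's actual fault models or SPICE setup): a weak fault is modeled as a cell that reads correctly under nominal test conditions but behaves as stuck-at-0 under application stress, and the test program is the classic MATS+ march test {⇕(w0); ⇑(r0,w1); ⇓(r1,w0)}. The test passes, i.e., the weak fault escapes, yet the cell fails in the field.

```python
# Toy illustration of a weak-fault test escape (hypothetical model,
# not the thesis's methodology).

class Memory:
    def __init__(self, size, weak_cells=()):
        self.size = size
        self.data = [0] * size
        self.weak = set(weak_cells)   # addresses with a weak fault
        self.stressed = False         # models application-time conditions

    def write(self, addr, bit):
        self.data[addr] = bit

    def read(self, addr):
        # A weak fault does not corrupt reads under nominal test
        # conditions, but behaves as stuck-at-0 under stress.
        if addr in self.weak and self.stressed:
            return 0
        return self.data[addr]

def mats_plus(mem):
    """MATS+ march test: {⇕(w0); ⇑(r0,w1); ⇓(r1,w0)}. True = pass."""
    for a in range(mem.size):          # ⇕(w0)
        mem.write(a, 0)
    for a in range(mem.size):          # ⇑(r0,w1)
        if mem.read(a) != 0:
            return False
        mem.write(a, 1)
    for a in reversed(range(mem.size)):  # ⇓(r1,w0)
        if mem.read(a) != 1:
            return False
        mem.write(a, 0)
    return True

mem = Memory(16, weak_cells=[5])
print(mats_plus(mem))   # True: the weak fault escapes the test
mem.stressed = True     # component now operating in the application
mem.write(5, 1)
print(mem.read(5))      # 0: the cell fails in the field -> NTF at the supplier
```

Under this toy model, a tester running MATS+ would ship the part, and the board-level failure would not reproduce on the supplier's nominal-condition retest, mirroring the NTF scenario the abstract describes.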