MARC Leader 00000nam  2200385   4500 
001    AAI3409154 
005    20110930095854.5 
008    110930s2010    ||||||||||||||||| ||eng d 
020    9781124063270 
035    (UMI)AAI3409154 
040    UMI|cUMI 
100 1  Gutstein, Steven Michael 
245 10 Transfer learning techniques for deep neural nets 
300    121 p 
500    Source: Dissertation Abstracts International, Volume: 71-
       07, Section: B, page: 4342 
500    Advisers: Olac Fuentes; Eric Freudenthal 
502    Thesis (Ph.D.)--The University of Texas at El Paso, 2010 
520    Inductive learners seek meaningful features within raw 
       input. Their purpose is to accurately categorize, explain 
       or extrapolate from this input. Relevant features for one 
       task are frequently relevant for related tasks. Reuse of 
       previously learned data features to help master new tasks 
       is known as 'transfer learning'. People use this technique
       to learn more quickly and easily. Machine learning, 
       however, still tends to start from scratch 
520    In this thesis, two machine learning techniques are 
       developed that use transfer learning to achieve 
       significant accuracy for recognition tasks with extremely 
       small training sets and, occasionally, no task-specific 
       training. These methods were developed for neural nets, 
       not only because neural nets are a well-established 
       machine learning technique, but also because their 
       modularity makes them a promising candidate for transfer 
       learning. Specifically, an architecture known as a 
       convolutional neural net is used because it has a 
       modularity defined both by the fact that it is a deep net 
       and by its use of feature maps within each layer of the 
       net 
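
As an illustration of the modularity described in the abstract, the
following is a minimal sketch of a convolutional net in PyTorch. The
framework, the name SmallConvNet, and all layer sizes are assumptions
for illustration only; the thesis predates PyTorch and its exact
architecture differs. Depth comes from the stacked layers, and each
convolutional layer produces a bank of feature maps:

    # Minimal convolutional net sketch: modular both in depth
    # (stacked layers) and in width (feature maps per layer).
    import torch
    import torch.nn as nn

    class SmallConvNet(nn.Module):
        def __init__(self, n_classes=10):
            super().__init__()
            self.features = nn.Sequential(
                nn.Conv2d(1, 6, kernel_size=5),   # 6 feature maps
                nn.Tanh(),
                nn.AvgPool2d(2),
                nn.Conv2d(6, 16, kernel_size=5),  # 16 feature maps
                nn.Tanh(),
                nn.AvgPool2d(2),
            )
            # sized for 28x28 single-channel input
            self.classifier = nn.Linear(16 * 4 * 4, n_classes)

        def forward(self, x):
            x = self.features(x)        # reusable feature maps
            x = x.flatten(1)
            return self.classifier(x)   # task-specific output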
520    The first transfer learning method developed, structurally 
       based transfer, relies on the architecture of a neural net 
       to determine which nodes should or should not be 
       transferred. This represents an improvement in ease of use 
       over existing techniques 
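
One common reading of such structure-guided transfer, continuing the
SmallConvNet sketch above (an assumption, not the thesis's exact
node-selection rule), is to copy whole feature-extraction layers from
a net trained on a source task, freeze them, and train only the
task-specific classifier:

    # Hypothetical sketch: the net's structure decides what transfers.
    source_net = SmallConvNet(n_classes=10)
    # ... assume source_net has been trained on a source task ...

    target_net = SmallConvNet(n_classes=5)   # new, related task
    target_net.features.load_state_dict(source_net.features.state_dict())
    for p in target_net.features.parameters():
        p.requires_grad = False              # freeze transferred layers

    # Only the classifier is trained on the (very small) target set.
    optimizer = torch.optim.SGD(target_net.classifier.parameters(), lr=0.01)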
520    The second technique takes a very different approach to 
       the concept of training. Traditionally, neural nets are 
       trained to give specific outputs in response to specific 
       inputs. These outputs are arbitrarily chosen by the net's 
       trainers. However, even prior to training, the probability
       distribution of a net's output in response to a specific 
       input class is not uniform. The term inherent bias is 
       introduced to refer to a net's preferred response to a 
       given class of input, whether or not that response has 
       been trained into the net. The main focus of this work 
       is the use of inherent biases that have not been 
       trained into the net 
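
Inherent bias can be measured directly: feed a net examples of one
input class and record which output unit it prefers. A sketch,
assuming a hypothetical DataLoader loader_for_class that yields
examples of a single class:

    # Empirical output distribution of `net` over one input class.
    # Even an untrained net typically yields a non-uniform result.
    @torch.no_grad()
    def inherent_bias(net, loader_for_class, n_outputs):
        counts = torch.zeros(n_outputs)
        for x, _ in loader_for_class:
            preds = net(x).argmax(dim=1)  # preferred output per example
            counts += torch.bincount(preds, minlength=n_outputs).float()
        return counts / counts.sum()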
520    If a net has already been trained for one set of tasks, 
       then its inherent bias may already provide a surprisingly
       high degree of accuracy for other, similar tasks that have
       not yet been encountered. Psychologists refer to this 
       as latent learning. The accuracies obtainable in such a 
       manner are examined, as is the use of structurally based 
       transfer in conjunction with latent learning. These 
       methods provide significant recognition rates for very 
       small training sets 
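
Continuing the sketches above, one simplified way to exploit this
latent learning (an assumption about the evaluation, not the thesis's
exact protocol) is to map each new, untrained class to the output
unit the trained net already prefers for it, then classify with no
task-specific training:

    # Assign each new class its preferred output unit (inherent bias);
    # `loaders` is a hypothetical dict: class label -> DataLoader.
    def latent_class_map(net, loaders, n_outputs):
        return {cls: int(inherent_bias(net, loader, n_outputs).argmax())
                for cls, loader in loaders.items()}

    # Score the untrained mapping on held-out data. Simplified: two
    # classes sharing a preferred unit are not disambiguated here.
    @torch.no_grad()
    def latent_accuracy(net, loader, class_map):
        correct, total = 0, 0
        for x, y in loader:
            preds = net(x).argmax(dim=1)
            for p, t in zip(preds.tolist(), y.tolist()):
                correct += int(p == class_map[int(t)])
                total += 1
        return correct / total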
590    School code: 0459 
650  4 Computer Science 
690    0984 
710 2  The University of Texas at El Paso.|bComputer Science 
773 0  |tDissertation Abstracts International|g71-07B 
856 40 |uhttp://pqdd.sinica.edu.tw/twdaoapp/servlet/
       advanced?query=3409154