Title
Architecture of neural networks using genetic algorithms /
Author
Farag, Waleed Ezzat.
Preparation Committee
Researcher / Waleed Ezzat Farag
Supervisor / Ibrahim Zidan
Supervisor / Mostafa Siam
Supervisor / Mahmoud Abdallah
Subject
Neural networks (Computer science) - Design and construction. Genetic algorithms.
Publication Date
1997.
Number of Pages
165 p.
Language
English
Degree
Master's
Specialization
Electrical and Electronic Engineering
Approval Date
1/1/1997
Place of Approval
Zagazig University - Faculty of Engineering - Communications and Electronics
Table of Contents
Only 14 pages (out of 170) are available for public view


Abstract

Neural computing is one of the most rapidly expanding areas of current research, attracting researchers from a wide variety of disciplines. The main cause of this popularity is that neural computing tries to capture the guiding principles that underlie the brain's solutions to complex problems and apply them to computer systems. This makes artificial neural networks a very appealing choice for solving a wide range of complex problems for which other techniques fail to provide satisfactory solutions.
One of the most significant developments in the area of neural networks is the backpropagation algorithm. However, using backpropagation to train feedforward neural networks assumes that the structure of the network and the values of the learning rule parameters are fixed before training begins. In most cases there is no simple method for determining the optimal structure and the best learning parameters in advance, so these values are usually chosen by trial and error, trying different combinations of parameter values until the desired performance is obtained. This is a tedious and computationally expensive process, as the sketch below illustrates.
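A minimal sketch of that trial-and-error baseline, written here as an exhaustive grid search in Python; the function name grid_search, the injected train_and_score callback, and the example grid sizes are illustrative assumptions, not details from the thesis.

    from itertools import product

    def grid_search(train_and_score, hidden_sizes, learning_rates, momenta):
        """Exhaustive trial and error: train one network per parameter
        combination and keep the best-scoring one (lower score = better)."""
        best = None
        for hidden, lr, mom in product(hidden_sizes, learning_rates, momenta):
            score = train_and_score(hidden, lr, mom)  # e.g. test-set error
            if best is None or score < best[0]:
                best = (score, hidden, lr, mom)
        return best

    # Even a coarse grid grows multiplicatively: 8 sizes x 6 rates x 4 momenta
    # already requires training 192 separate networks.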
This thesis introduces a new algorithm that systematically selects proper network architectures. The algorithm was developed by integrating a genetic algorithm with a momentum backpropagation learning algorithm: the genetic algorithm optimizes network architectures, while backpropagation trains the networks and generates the connection weights. The developed algorithm alleviates the burden of predetermining the network architecture and learning rule parameters by automatically searching the space of possible network architectures. The best architecture, together with its associated learning rule parameters, is evolved for the specified problem based on a number of performance criteria that can be weighted to favor one over another.
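For reference, the training half of this hybrid can be summarized by the momentum weight update rule, sketched below; the default lr and momentum values are illustrative, since in the proposed system the learning rule parameters are among the quantities being evolved.

    import numpy as np

    def momentum_step(w, grad, velocity, lr=0.25, momentum=0.9):
        """One momentum-backpropagation update for a weight array w:
            v(t)   = momentum * v(t-1) - lr * dE/dw
            w(t+1) = w(t) + v(t)
        The velocity term reuses the previous step's direction, which damps
        oscillation and speeds convergence over plain gradient descent."""
        velocity = momentum * velocity - lr * grad
        return w + velocity, velocity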
The proposed algorithm starts by generating random network architectures; the genetic algorithm is then used to extract and combine the best features of these networks to produce better ones. The extraction of best features is guided by the evaluation of each structure, which in turn depends mainly on the results of training and testing the network structures using backpropagation as the learning algorithm.
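A minimal sketch of such a genetic loop over bit-string chromosomes follows. The operators shown (truncation selection with elitism, one-point crossover, bit-flip mutation) are standard textbook choices used here for illustration; the thesis's exact operators and parameter values may differ.

    import random

    def evolve(fitness, pop_size=20, generations=30, chrom_len=16,
               crossover_rate=0.8, mutation_rate=0.02):
        """Generic genetic-algorithm loop over bit-string chromosomes.
        `fitness` decodes a chromosome, trains the resulting network with
        backpropagation, and returns a score to maximize."""
        pop = [[random.randint(0, 1) for _ in range(chrom_len)]
               for _ in range(pop_size)]
        for _ in range(generations):
            ranked = sorted(pop, key=fitness, reverse=True)
            nxt = ranked[:2]                                     # elitism
            while len(nxt) < pop_size:
                a, b = random.sample(ranked[:pop_size // 2], 2)  # select parents
                if random.random() < crossover_rate:             # one-point crossover
                    cut = random.randrange(1, chrom_len)
                    child = a[:cut] + b[cut:]
                else:
                    child = a[:]
                # Bit-flip mutation.
                child = [bit ^ (random.random() < mutation_rate) for bit in child]
                nxt.append(child)
            pop = nxt
        return max(pop, key=fitness)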
The efficiency of the system was a main concern: a simple encoding scheme was used to reduce the size of the search space and thus increase the speed of the system, and small chromosomes (able to encode most of the promising architectures and learning parameters) were used to limit the amount of computer memory consumed.
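As an illustration of how such a small chromosome might map to a network and its learning parameters, the decoder below assumes a hypothetical 16-bit layout; the field widths and value ranges are invented for this sketch and are not the thesis's actual encoding scheme.

    def decode(chromosome):
        """Decode a 16-bit chromosome (list of 0/1 ints) into an architecture
        plus learning parameters. Hypothetical layout:
            bits  0-4  -> hidden units, 1..32
            bits  5-10 -> learning rate in (0, 1]
            bits 11-15 -> momentum coefficient in [0, 31/32]"""
        def to_int(bits):
            return int("".join(map(str, bits)), 2)
        hidden   = to_int(chromosome[0:5]) + 1
        lr       = (to_int(chromosome[5:11]) + 1) / 64.0
        momentum = to_int(chromosome[11:16]) / 32.0
        return hidden, lr, momentum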
Diverse types of applications were used to evaluate the performance of the proposed algorithm: the XOR problem, binary addition of two 2-bit numbers, the odd-parity problem, classification of bit-map characters, a function approximation problem, and finally classification of a certain pattern distribution. The results obtained are very good in terms of network architectures and learning rule parameters. In most cases the algorithm generates optimal or near-optimal network structures that can solve the given problem.
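To make the evaluation step concrete on the first of these benchmarks, the sketch below trains a tiny sigmoid network on XOR with momentum backpropagation and returns its final mean squared error, which a genetic algorithm could treat as an (inverse) fitness score. The network size, epoch count, and parameter values are illustrative, not the thesis's reported settings.

    import numpy as np

    rng = np.random.default_rng(0)

    # XOR truth table: the smallest benchmark problem listed above.
    X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
    y = np.array([[0], [1], [1], [0]], dtype=float)

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    def train_xor(hidden, lr, momentum, epochs=5000):
        """Train a 2-hidden-1 sigmoid network on XOR with momentum
        backpropagation; return final mean squared error (lower = fitter)."""
        W1 = rng.normal(0.0, 0.5, (2, hidden)); b1 = np.zeros(hidden)
        W2 = rng.normal(0.0, 0.5, (hidden, 1)); b2 = np.zeros(1)
        vW1 = np.zeros_like(W1); vb1 = np.zeros_like(b1)
        vW2 = np.zeros_like(W2); vb2 = np.zeros_like(b2)
        for _ in range(epochs):
            h = sigmoid(X @ W1 + b1)                  # forward pass
            out = sigmoid(h @ W2 + b2)
            d_out = (out - y) * out * (1.0 - out)     # output-layer delta
            d_h = (d_out @ W2.T) * h * (1.0 - h)      # hidden-layer delta
            # Momentum updates, applied in place to each parameter array.
            for p, v, g in ((W2, vW2, h.T @ d_out), (b2, vb2, d_out.sum(0)),
                            (W1, vW1, X.T @ d_h),   (b1, vb1, d_h.sum(0))):
                v *= momentum
                v -= lr * g
                p += v
        out = sigmoid(sigmoid(X @ W1 + b1) @ W2 + b2)
        return float(np.mean((out - y) ** 2))

    print(train_xor(hidden=3, lr=0.5, momentum=0.9))  # typically well below 0.01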
Detailed comparisons between the results obtained with the proposed algorithm on the applications above and those obtained with the trial-and-error method were presented for each problem. These comparisons illustrate the effectiveness of the proposed system in terms of efficient use of computational resources and the reduced effort required to design a neural network for a given application. They also illustrate the superior performance of the proposed system in developing optimal or near-optimal network structures in a systematic way.