mkassm/PSO


This repository contains research on Particle Swarm Optimization (PSO) and its implementation for optimizing an Artificial Neural Network (ANN).

  • Motivation

    • Gradient descent requires a differentiable activation function in order to compute derivatives, which makes the backward pass slower than the feedforward pass
    • Speeding up backpropagation requires a lot of memory to store intermediate activations
    • Backpropagation depends strongly on weight and bias initialization; a bad choice can lead to stagnation at a local minimum, so only a suboptimal solution is found
    • Backpropagation cannot make full use of parallelization due to its sequential nature
  • Advantages of PSO (a sketch of the core update rule follows this list)

    • PSO does not require the objective function to be differentiable or continuous
    • PSO leads to better convergence and is less likely to get stuck in local minima
    • Particle evaluations are independent of one another, so PSO runs faster on a GPU
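
Since PSO needs only loss evaluations and no gradients, training an ANN with it amounts to treating each flattened weight vector as a particle position and the network's loss as the fitness function. Below is a minimal numpy sketch of the canonical update rule (inertia plus cognitive and social pulls); the function pso_minimize, the swarm size, and the coefficients are illustrative assumptions, not the repository's actual code.

import numpy as np

def pso_minimize(loss, dim, n_particles=30, iters=100,
                 w=0.7, c1=1.5, c2=1.5, seed=0):
    # Hypothetical helper: minimizes loss(weights) over flat weight vectors.
    rng = np.random.default_rng(seed)
    x = rng.uniform(-1.0, 1.0, (n_particles, dim))  # particle positions
    v = np.zeros((n_particles, dim))                # particle velocities
    pbest = x.copy()                                # each particle's best position
    pbest_val = np.array([loss(p) for p in x])
    g = pbest[pbest_val.argmin()].copy()            # global best position
    for _ in range(iters):
        r1, r2 = rng.random((2, n_particles, dim))
        # Canonical update: inertia + cognitive pull + social pull
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = x + v
        vals = np.array([loss(p) for p in x])
        improved = vals < pbest_val                 # update personal bests
        pbest[improved] = x[improved]
        pbest_val[improved] = vals[improved]
        g = pbest[pbest_val.argmin()].copy()        # update global best
    return g, pbest_val.min()

# Toy usage: a quadratic stands in for an ANN loss surface.
best_w, best_loss = pso_minimize(lambda p: float(np.sum(p ** 2)), dim=5)
print(best_w, best_loss)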

Environment and dependencies

  • Windows 10
  • AMD Radeon 530 GPU
  • Python 3.9
  • matplotlib 3.3
  • numpy 1.19.5
  • scikit-learn 0.24
  • scipy 1.6.0
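
If these exact versions are wanted, they can be installed with pip in one line (a convenience assumption; the repository may instead ship a requirements file):

$ pip install matplotlib==3.3 numpy==1.19.5 scikit-learn==0.24 scipy==1.6.0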

How to run

$ git clone https://github.com/aboelkassem/PSO.git
$ cd PSO
$ python example_mnist.py

The following diagram and reports show performance on the test split of the dataset, which contains 10 digit classes.
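
The repository's reporting code is not reproduced here, but per-class reports of this kind are typically generated with scikit-learn's classification_report; in the sketch below, y_true and y_pred are placeholders for the MNIST test labels and the PSO-trained network's predictions.

import numpy as np
from sklearn.metrics import classification_report, confusion_matrix

# Placeholder labels/predictions; a real run would use the MNIST test
# labels and the trained network's outputs over all 10 digit classes.
y_true = np.array([0, 1, 2, 2, 9])
y_pred = np.array([0, 1, 2, 1, 9])

print(classification_report(y_true, y_pred, digits=3))  # per-class precision/recall/F1
print(confusion_matrix(y_true, y_pred))                 # 10x10 matrix on full MNIST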
