
Peer Reviewed Article

Vol. 6 (2019)

P-SVM Gene Selection for Automated Microarray Categorization

Published
2019-02-15

Abstract

In computer vision, the success of Convolutional Neural Networks (CNNs) is largely driven by their strong inductive bias, which is powerful enough that networks with undetermined weights can solve vision-related tasks without any training. Similarly, Long Short-Term Memory (LSTM) networks have a strong inductive bias toward storing information over long time horizons. However, many real-world systems are governed by conservation laws, which lead to the redistribution of particular quantities, for example in physical and economic systems. We introduce the Mass-Conserving LSTM (MC-LSTM), which adheres to these conservation laws by extending the inductive bias of the LSTM: the MC-LSTM models the redistribution of the conserved quantities. As a neural arithmetic unit, MC-LSTM sets a new state of the art for training on arithmetic operations, such as addition tasks, which obey a strict conservation law because the sum remains constant over time. We further apply MC-LSTM to traffic forecasting, to modelling a damped pendulum, and to a large hydrological benchmark data set, where it sets a new state of the art for predicting peak flows. For the hydrology application, we also show that the MC-LSTM states correlate with real-world processes, making them interpretable.
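The conservation mechanism described above can be illustrated with a minimal sketch. The idea is that stored mass is redistributed by a column-stochastic matrix (each column sums to 1), new inflow is split across cells by a normalized input gate, and an output gate decides how much mass leaves the system; whatever is not emitted stays in the cell states, so nothing is created or destroyed. All parameter names here (`R`, `i`, `o`) and their random, input-independent values are illustrative assumptions; in a trained MC-LSTM these gates are learned and depend on the input and state.

```python
import numpy as np

def softmax(z, axis=0):
    e = np.exp(z - z.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

rng = np.random.default_rng(0)
n = 4  # number of memory cells

# Hypothetical fixed gates (learned and state-dependent in the real model):
R = softmax(rng.normal(size=(n, n)), axis=0)   # redistribution: columns sum to 1
i = softmax(rng.normal(size=n))                # input gate: splits inflow, sums to 1
o = 1.0 / (1.0 + np.exp(-rng.normal(size=n)))  # output gate in (0, 1)

c = np.zeros(n)        # cell states hold the conserved quantity ("mass")
inflow_total = 0.0
outflow_total = 0.0

for x in [1.0, 0.5, 2.0]:      # scalar mass inflow at each time step
    m = R @ c + i * x          # redistribute stored mass, then add new mass
    h = o * m                  # mass emitted from the system this step
    c = (1.0 - o) * m          # mass retained in the cells
    inflow_total += x
    outflow_total += h.sum()

# Conservation law: stored mass + cumulative outflow == cumulative inflow
print(np.isclose(c.sum() + outflow_total, inflow_total))  # True
```

Because the columns of `R` and the entries of `i` each sum to one, every step's total mass `m.sum()` equals the previously stored mass plus the new inflow, and the gates only partition it between storage and outflow; this is exactly the accounting identity that an addition task with a constant sum, or a rainfall-runoff water balance, requires.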

