Artificial Intelligence Term Paper

15 Research Paper Topics On Artificial Intelligence And Machine Learning

Artificial intelligence is roughly as old as the digital computer itself. It showcases how a machine can think and how that compares to the workings of the human brain. The creation of robots, and the ways in which they can be improved, is a phenomenal example of artificial intelligence.

It would be a great way to harmonize the heart and the head. Here are 15 thought-provoking research topics on artificial intelligence and machine learning:

  1. Exponential enhancement of the ability to memorize and read – Find and write about a system that can enhance your capacity to read quickly and learn even more.
  2. The insertion of emotions in robots – A robot can do much of what a human can, but it does not emote or feel. Inquire whether such an intervention has been attempted and how.
  3. The actual merit of artificial intelligence – Shed light on how it is used in warfare.
  4. The impact of machine learning in today’s times – It can certainly help students manage their time.
  5. The metaphysics of kernels – Kernels, like coconut shells, hold a microcosm of life. Approach the subject with a research orientation.
  6. Is 3D printing the zenith of artificial intelligence? – It certainly seems too good to be true.
  7. Why Japan is the Asian leader in deriving utility from artificial intelligence – It is certainly a country of innovation and origami. The conclusion is hardly surprising.
  8. How can dolphins exponentially raise their standard of living? – They are said to use 20% of their brains while humans use just 2-3%. Do the math.
  9. Can Bayesian analysis now make life convenient? – It is certainly possible in a subjective way, thanks to the emergence of supercomputers.
  10. The core of deep learning – This is a fantastic mechanism that is being augmented by the day.
  11. The role that chess can play – It is enticing to note that a mere 64 squares hold a world of possibilities.
  12. Creating a simulated world of artificial intelligence – Will the future breed cyborgs in the natural sense?
  13. The many striations of education technology – The manner in which it is reaching rural areas and touching lives is admirable.
  14. Is the concept of brainy aliens a tribute to artificial intelligence? – It may not be, but the manner in which we keep according powers to aliens definitely is.
  15. Decoding the mechanism of dreams through scientific interpretation – Sigmund Freud showed the world the way psychologically; now it is time for some somatic thinking.

Artificial Intelligence research advances are transforming technology as we know it. The AI research community is solving some of the most challenging problems related to software and hardware infrastructure, theory, and algorithms. Interestingly, the field of AI research has drawn acolytes from non-tech fields as well. Case in point: Hollywood actor Kristen Stewart’s highly publicized paper on Artificial Intelligence, originally published on Cornell University library’s open access site. Stewart co-authored the paper, titled “Bringing Impressionism to Life with Neural Style Transfer in Come Swim”, with the American poet and literary critic David Shapiro and Adobe Research engineer Bhautik Joshi.

Essentially, the paper discusses the style transfer techniques used in her short film Come Swim. Stewart’s detractors, however, dismissed it as another “high-level case study.”

Meanwhile, the community is awash with ground-breaking research papers on AI. Analytics India Magazine lists the most cited scientific papers on AI, machine intelligence, and computer vision, to give a perspective on the technology and its applications.

Most of these papers have been chosen on the basis of their citation counts. Some also take into account a Highly Influential Citation count (HIC) and Citation Velocity (CV). Citation Velocity is the weighted average number of citations per year over the last three years.


A Computational Approach to Edge Detection: Originally published in 1986 and authored by John Canny, this paper on the computational approach to edge detection has approximately 9,724 citations. The success of the approach is defined by a comprehensive set of goals for the computation of edge points. These goals must be precise enough to delimit the desired behavior of the detector while making minimal assumptions about the form of the solution.

The paper also presents a general method, called feature synthesis, for the fine-to-coarse integration of information from operators at different scales. This helps establish that edge detector performance improves considerably as the operator point spread function is extended along the edge.
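A core step of Canny's detector is hysteresis thresholding: responses above a high threshold are kept as edges, and weaker responses survive only if they connect to a strong edge. A minimal 1-D sketch of that idea (the function name and list representation are illustrative, not from the paper; the real detector tracks connectivity in 2-D):

```python
def hysteresis_1d(magnitudes, low, high):
    """Keep points with gradient magnitude >= high, plus points >= low
    that are connected (through neighbors) to such a strong point."""
    keep = [m >= high for m in magnitudes]
    changed = True
    while changed:  # propagate connectivity until stable
        changed = False
        for i, m in enumerate(magnitudes):
            if not keep[i] and m >= low:
                left = i > 0 and keep[i - 1]
                right = i + 1 < len(magnitudes) and keep[i + 1]
                if left or right:
                    keep[i] = True
                    changed = True
    return keep
```

The two thresholds suppress isolated noise (below `high` and unconnected) without fragmenting genuine edges whose magnitude dips between `low` and `high`.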


A Proposal for the Dartmouth Summer Research Project on Artificial Intelligence: This research paper was co-written by John McCarthy, Marvin L. Minsky, Nathaniel Rochester, and Claude E. Shannon, and published in 1955. This summer research proposal defined the field, and has another first to its name: it is the first paper to use the term Artificial Intelligence. The proposal invited researchers to the Dartmouth conference, which is widely considered the “birth of AI”.


A Threshold Selection Method from Gray-Level Histograms: The paper was authored by Nobuyuki Otsu and published in 1979. It has received 7,849 citations so far. In this paper, Otsu discusses a nonparametric and unsupervised method of automatic threshold selection for picture segmentation.

The paper explains how an optimal threshold is selected by the discriminant criterion to maximize the separability of the resultant classes in gray levels. The procedure utilizes only the zeroth- and first-order cumulative moments of the gray-level histogram, and extends easily to multi-threshold problems. The paper validates the method with several experimental results.
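Because the criterion needs only the zeroth- and first-order cumulative moments of the histogram, the optimal threshold can be found in a single pass over the gray levels. A minimal sketch for the two-class case (the function name and 8-bin example are illustrative, not from the paper):

```python
def otsu_threshold(hist):
    """Return the gray level t that maximizes between-class variance.

    hist: list of pixel counts per gray level.
    """
    total = sum(hist)
    sum_all = sum(i * h for i, h in enumerate(hist))  # global first-order moment
    w0 = 0     # zeroth-order cumulative moment (weight of class 0)
    sum0 = 0   # first-order cumulative moment of class 0
    best_t, best_var = 0, -1.0
    for t in range(len(hist)):
        w0 += hist[t]
        if w0 == 0:
            continue
        w1 = total - w0
        if w1 == 0:
            break
        sum0 += t * hist[t]
        mu0, mu1 = sum0 / w0, (sum_all - sum0) / w1  # class means
        between_var = w0 * w1 * (mu0 - mu1) ** 2
        if between_var > best_var:
            best_var, best_t = between_var, t
    return best_t
```

On a bimodal histogram the maximizing threshold falls between the two modes, which is exactly the separability the discriminant criterion measures.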


Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift: This 2015 article was co-written by Sergey Ioffe and Christian Szegedy. The paper has received 946 citations and has a HIC score of 56.

The paper describes how training deep neural networks is complicated by the fact that the distribution of each layer’s inputs changes during training, as the parameters of the previous layers change. This phenomenon is termed internal covariate shift, and the paper addresses it by normalizing layer inputs.
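For a single unit, the transform normalizes each mini-batch to zero mean and unit variance, then applies the learnable scale and shift (gamma, beta) described in the paper. A simplified sketch assuming a 1-D batch of floats (real implementations operate on tensors and track running statistics for inference):

```python
import math

def batch_norm(batch, gamma=1.0, beta=0.0, eps=1e-5):
    """Normalize one unit's activations over a mini-batch,
    then apply the learnable scale (gamma) and shift (beta)."""
    m = len(batch)
    mean = sum(batch) / m
    var = sum((x - mean) ** 2 for x in batch) / m
    return [gamma * (x - mean) / math.sqrt(var + eps) + beta for x in batch]
```

The epsilon term guards against division by zero when a batch has near-constant activations.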

Batch normalization achieves the same accuracy with 14 times fewer training steps when applied to a state-of-the-art image classification model. In other words, batch normalization beats the original model by a significant margin.


Deep Residual Learning for Image Recognition: The 2016 paper was co-authored by Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. The paper has been cited 1,436 times, with a HIC value of 137 and a CV of 582. The authors present a residual learning framework to ease the training of neural networks that are substantially deeper than those used previously.

The research paper explicitly reformulates the layers as learning residual functions with reference to the layer inputs, instead of learning unreferenced functions. Comprehensive empirical evidence shows that these residual networks are easier to optimize and can gain accuracy from considerably increased depth.
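The reformulation amounts to adding an identity shortcut around a stack of layers, so the stack only has to learn the residual F(x) rather than the full mapping; if the identity is already near-optimal, the layers can simply drive F toward zero. A toy sketch (the callable `layer` standing in for the weight layers is this example's assumption):

```python
def residual_block(x, layer):
    """Compute y = F(x) + x, where `layer` plays the role of the
    stacked weight layers F and x is a list of activations."""
    fx = layer(x)
    return [f + xi for f, xi in zip(fx, x)]
```

When the inner layers output zeros, the block reduces to the identity, which is why stacking many such blocks does not make optimization harder in the way plain deep stacks do.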


Distinctive Image Features from Scale-Invariant Keypoints: This article was authored by David G. Lowe in 2004. The paper has received 21,528 citations and explores a method for extracting distinctive invariant features from images. These can be used to perform reliable matching between different views of an object or scene. The features are invariant to image scale and rotation, and are shown to provide robust matching across a substantial range of affine distortion, change in 3D viewpoint, addition of noise, and change in illumination.

The paper also presents an approach that leverages these features for image recognition, identifying objects among clutter and occlusion while achieving near real-time performance.
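Reliable matching in the paper rests on comparing each descriptor's nearest neighbor against its second nearest: a match is accepted only when the best candidate is clearly closer, which discards ambiguous matches. A toy sketch of that ratio test, with short float tuples standing in for 128-D SIFT descriptors (the squared-distance comparison is this sketch's shortcut, hence the `ratio ** 2`):

```python
def ratio_test_match(desc, candidates, ratio=0.8):
    """Return the index of the best-matching candidate descriptor,
    or None when the nearest/second-nearest ratio test fails."""
    if len(candidates) < 2:
        return None
    # squared Euclidean distances, so compare against ratio**2
    dists = [sum((a - b) ** 2 for a, b in zip(desc, c)) for c in candidates]
    order = sorted(range(len(dists)), key=dists.__getitem__)
    best, second = order[0], order[1]
    if dists[best] < (ratio ** 2) * dists[second]:
        return best
    return None
```

Two near-identical candidates produce no match at all, which is preferable to a confident wrong match when an object appears amid clutter.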


Dropout: a simple way to prevent neural networks from overfitting: The 2014 paper was co-authored by Nitish Srivastava, Geoffrey Hinton, Alex Krizhevsky, Ilya Sutskever, and Ruslan Salakhutdinov. The paper has been cited around 2,084 times, with HIC and CV values of 142 and 536 respectively. Deep neural nets with a large number of parameters are very powerful machine learning systems; however, overfitting is a serious problem in such networks.

The central premise of the paper is to drop units (along with their connections) from the neural network during training, thus preventing units from co-adapting too much. This helps in significantly reducing overfitting, while furnishing major improvements over other regularization methods.
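On a single layer's activations, dropping units looks like the sketch below. It uses the inverted-dropout scaling common in modern implementations (an assumption of this example; the paper itself instead scales the weights down at test time):

```python
import random

def dropout(activations, p=0.5, training=True, rng=random):
    """Zero each unit with probability p during training, and scale
    the survivors by 1/(1-p) so expected activations are unchanged."""
    if not training or p == 0.0:
        return list(activations)
    keep = 1.0 - p
    return [a / keep if rng.random() < keep else 0.0 for a in activations]
```

Because a different random subnetwork is active on each training step, units cannot rely on specific co-adapted partners, which is the regularizing effect the paper describes.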


Induction of decision trees: Authored by J. R. Quinlan, this scientific paper was originally published in 1986 and summarizes an approach to synthesizing decision trees that has been used in a variety of systems. The paper describes one such system, ID3, in detail. It also discusses a reported shortcoming of the basic algorithm and compares two methods of overcoming it. To conclude, the author presents illustrations of current research directions.
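ID3 grows a tree by repeatedly splitting on the attribute with the highest information gain, i.e. the largest drop in entropy of the class labels. A minimal sketch of that splitting criterion (function names are illustrative, not from the paper):

```python
import math
from collections import Counter

def entropy(labels):
    """Shannon entropy of a list of class labels, in bits."""
    total = len(labels)
    return -sum((c / total) * math.log2(c / total)
                for c in Counter(labels).values())

def information_gain(labels, attribute_values):
    """Entropy reduction from splitting `labels` by the parallel
    list `attribute_values` (one attribute value per example)."""
    total = len(labels)
    groups = {}
    for label, value in zip(labels, attribute_values):
        groups.setdefault(value, []).append(label)
    remainder = sum(len(g) / total * entropy(g) for g in groups.values())
    return entropy(labels) - remainder
```

An attribute that separates the classes perfectly yields the full entropy as gain, while one whose split leaves each group as mixed as the whole yields zero.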


Large-Scale Video Classification with Convolutional Neural Networks: This 2014 paper was co-written by six authors: Andrej Karpathy, George Toderici, Sanketh Shetty, Thomas Leung, Rahul Sukthankar, and Li Fei-Fei. The paper has been cited over 865 times, with a HIC score of 24 and a CV of 239.

Convolutional Neural Networks (CNNs) have proven to be a powerful class of models for image recognition problems. These results encouraged the authors to provide an extensive empirical evaluation of CNNs on large-scale video classification, using a new dataset of 1 million YouTube videos belonging to 487 classes.


Probabilistic Reasoning in Intelligent Systems: Networks of Plausible Inference: This book, authored by Judea Pearl, was published in 1988. It presents a complete and accessible account of the theoretical foundations and computational methods that underlie plausible reasoning under uncertainty.

Pearl provides a coherent explication of probability as a language for reasoning with partial belief and offers a unifying perspective on other AI approaches to uncertainty, such as the Dempster-Shafer formalism, truth maintenance systems, and nonmonotonic logic.
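In Pearl's networks, the joint distribution factorizes into one conditional probability per variable given its parents, which is what makes reasoning over many variables tractable. A toy sketch of that factorization (the variable names and callable-CPT representation are this example's assumptions, not Pearl's notation):

```python
def joint_probability(assignment, cpts, parents):
    """P(x1..xn) = product of P(xi | parents(xi)) for a Bayesian network.

    assignment: dict variable -> value
    cpts:       dict variable -> callable (value, parent_values) -> prob
    parents:    dict variable -> tuple of parent variable names
    """
    p = 1.0
    for var, value in assignment.items():
        parent_vals = tuple(assignment[pa] for pa in parents[var])
        p *= cpts[var](value, parent_vals)
    return p
```

For a two-node network Rain -> WetGrass, P(rain, wet) = P(rain) * P(wet | rain), so the full joint never has to be stored explicitly.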
