
  • Intelligent Vision-Based System for Identifying Predators in Uganda: A Deep Learning Approach for Camera Trap Image Analysis

    This study presents an effective vision-based method for accurately identifying predator species from camera trap images in protected areas of Uganda. To address the challenges of object detection in natural environments, we propose a new multiphase deep learning architecture that combines multi-feature extraction with focused edge detection. Compared to previous approaches, our method achieves 90.9% classification accuracy while requiring significantly fewer manually annotated training samples. Background pixels were systematically filtered to improve model performance under varied environmental conditions. This work contributes to both conservation biology and computer vision, demonstrating an efficient, data-driven approach to automated wildlife monitoring that supports science-based conservation measures.

    Keywords: deep learning, camera trap, convolutional neural network, dataset, predator, Kidepo National Park, wildlife
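    The abstract mentions systematically filtering background pixels before classification. The paper does not specify the filtering method, but one common pre-processing approach is to zero out pixels that closely match a background estimate (e.g., a median frame from the same camera trap). A minimal sketch of that idea, with the `threshold` value and the differencing scheme being assumptions for illustration:

    ```python
    import numpy as np

    def filter_background(frame, background, threshold=30):
        """Zero out pixels close to a per-camera background estimate.

        Hypothetical pre-processing step: pixels whose maximum per-channel
        difference from the background falls below `threshold` are treated
        as background and suppressed, leaving only foreground (animal) pixels.
        """
        # Signed difference to avoid uint8 wrap-around
        diff = np.abs(frame.astype(np.int16) - background.astype(np.int16))
        # Foreground wherever any colour channel differs strongly
        mask = diff.max(axis=-1) > threshold
        return frame * mask[..., None].astype(frame.dtype)
    ```

    In practice the background estimate could come from averaging empty frames captured by the same trap; the masked image is then fed to the classifier so it learns from animal pixels rather than scene context.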

  • Bidirectional Long Short-Term Memory Networks for Automated Source Code Generation

    This paper examines the application of Bidirectional Long Short-Term Memory (Bi-LSTM) networks in neural source code generation. The research analyses how Bi-LSTMs process sequential data bidirectionally, capturing contextual information from both past and future tokens to generate syntactically correct and semantically coherent code. A comprehensive analysis of model architectures is presented, including embedding mechanisms, network configurations, and output layers. The study details data preparation processes, focusing on tokenization techniques that balance vocabulary size with domain-specific terminology handling. Training methodologies, optimization algorithms, and evaluation metrics are discussed with comparative results across multiple programming languages. Despite promising outcomes, challenges remain in functional correctness and complex code structure generation. Future research directions include attention mechanisms, innovative architectures, and advanced training procedures.

    Keywords: code generation, deep learning, recurrent neural networks, transformers, tokenisation
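    The abstract highlights tokenization techniques that balance vocabulary size against domain-specific terminology. One simple way to get that balance is greedy longest-match tokenization over a fixed subword vocabulary with a character-level fallback, so rare identifiers never produce out-of-vocabulary errors. A minimal sketch of that scheme (the vocabulary contents are illustrative, not from the paper):

    ```python
    def tokenize(code, vocab):
        """Greedy longest-match tokenization over a fixed vocabulary.

        Unknown spans fall back to single characters, which keeps the
        vocabulary small while still covering arbitrary identifiers.
        """
        tokens, i = [], 0
        while i < len(code):
            # Try the longest candidate substring first
            for j in range(len(code), i, -1):
                if code[i:j] in vocab:
                    tokens.append(code[i:j])
                    i = j
                    break
            else:
                tokens.append(code[i])  # character fallback for rare text
                i += 1
        return tokens
    ```

    With a vocabulary of common keywords and punctuation, frequent constructs become single tokens while novel identifiers decompose into characters, shortening the sequences a Bi-LSTM must model without unbounded vocabulary growth.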