
Our Work

A Fast Learning Algorithm for Rainfall Prediction

A PC-based application is developed using 51 years of Indian rainfall data for long-range prediction of average rainfall. The learning algorithm iteratively estimates 96 coefficients of a 5th-order polynomial in a few minutes. The proposed prediction model is based on modelling the rainfall time series with a 5th-order non-linear predictive code. A steepest descent algorithm is used to extract the appropriate coefficients from the rainfall time series, and these coefficients are reinforced during the model learning process. Rainfall data from 1960 to 2010 is used for the development of the model, which has been tested on different training sets. The proposed model is capable of forecasting yearly rainfall one year in advance, with an estimation accuracy above 85%. (Year:2014)
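
As a rough illustration (a minimal sketch, not the authors' implementation; the synthetic data, learning rate and normalisation are assumptions), the steepest-descent fitting step might look like this in Python:

import numpy as np

# Illustrative sketch: fit a 5th-order polynomial to a yearly rainfall
# series by steepest descent on the mean squared error.
rng = np.random.default_rng(0)
t = np.linspace(0.0, 1.0, 51)                # 51 years, scaled to [0, 1]
rain = 850 + 120 * np.sin(6 * t) + rng.normal(0, 20, t.size)  # synthetic data

X = np.vander(t, 6, increasing=True)         # columns 1, t, ..., t^5
y = (rain - rain.mean()) / rain.std()        # normalised target
w = np.zeros(6)                              # polynomial coefficients
lr = 0.5                                     # learning rate (assumed)

for _ in range(20000):
    err = X @ w - y                          # prediction error
    w -= lr * (X.T @ err) / t.size           # steepest-descent update

print("fitted coefficients:", w)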

Rainfall Prediction using Neural Net based Frequency Analysis Approach

Rainfall prediction is a very complex hydrologic process and is important, as it holds a key to any country's economy. The proposed model presents a new approach for yearly rainfall prediction for 30 Indian subdivisions. Yearly rainfall data for the Indian subdivisions is available from IITM, Pune. A combination of the Fast Fourier Transform (FFT) and a Feed Forward Neural Network (FFNN) is applied for next-year rainfall prediction. FFT with filtering is performed on the interpolated rainfall data to separate the periodic components. These periodic components and their delayed counterparts are given as input and desired output, respectively, to an FFNN for training. During testing, the inverse FFT of the FFNN output gives the rainfall value predicted ahead by the training input-output delay. The model is tested with 140 years of Indian subdivision rainfall data. Experimental results for the 30 subdivisions show that the next-year prediction accuracy is above 92%. (Year:2013)
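
A minimal sketch of the FFT filtering step (synthetic data; the number of retained components is an assumption, and the FFNN stage is omitted):

import numpy as np

# Illustrative sketch: keep only the strongest frequency components of a
# rainfall series, then reconstruct the periodic part with the inverse FFT.
rng = np.random.default_rng(1)
n = 140
years = np.arange(n)
rain = 900 + 80 * np.sin(2 * np.pi * years / 11) + rng.normal(0, 30, n)

spectrum = np.fft.rfft(rain - rain.mean())
keep = 5                                            # retained peaks (assumed)
strongest = np.argsort(np.abs(spectrum))[::-1][:keep]
filtered = np.zeros_like(spectrum)
filtered[strongest] = spectrum[strongest]           # band-pass style filtering

periodic = np.fft.irfft(filtered, n) + rain.mean()  # separated periodic part
print("residual std:", np.std(rain - periodic))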

A regenerative prediction algorithm for Indian rainfall prediction

Rainfall forecasting is critical for crop planning and water management strategies. The proposed study presents a novel approach for modelling time series precipitation data. 51 years of Indian rainfall data is used for the development of the model. We use an 11th-order non-linear predictive code with 240 coefficients, optimized using a gradient descent algorithm. The algorithm is tested using 40 years of rainfall training data, and the prediction error outside the training period is found to be less than 1% for a few months. The prediction period is extended to one year by feeding progressively predicted values back into the input samples using a regenerative feedback algorithm. The model is applied for different training and testing periods, with an average error of 2% to 10%. (Year:2013)
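
The regenerative feedback idea can be sketched as follows; the one-step model here is a toy stand-in for the paper's 11th-order predictive code:

# Illustrative sketch of regenerative (recursive) forecasting: each new
# prediction is appended to the input window and used for the next step.
def forecast(history, predict_one, steps):
    window = list(history)
    out = []
    for _ in range(steps):
        y = predict_one(window)       # one-step-ahead model (assumed given)
        out.append(y)
        window.append(y)              # regenerative feedback
    return out

# Toy stand-in model: average of the last 3 samples.
predict_one = lambda w: sum(w[-3:]) / 3.0
print(forecast([800.0, 850.0, 900.0], predict_one, steps=12))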

Earth Rock Classification Using Neural Network

The objective of this paper is to present a method for rock classification using microscopic imaging of surface parameters. Developments in computational intelligence have opened new opportunities in image processing, and this paper demonstrates one such example for earth rock classification. The method consists of feature extraction using wavelet-based data compression and neural-net-based feature classification. Classification accuracy is further improved using multi-parameter analysis of different surface parameters; the rock surface parameters used in this work are color, texture and grain. The combined signature extracted from these parameters is used to identify the rock type. The project is developed under the Planetary Exploration Technology Research project. (Year:2013)
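
A minimal sketch of the wavelet-based compression stage (one level of a 2-D Haar transform in plain NumPy; the image patch and the choice of the low-pass band as the feature are assumptions):

import numpy as np

# Illustrative sketch: one level of a 2-D Haar transform, keeping the
# low-pass band as a compressed feature vector for a classifier.
def haar2d_lowpass(img):
    img = img.astype(float)
    rows = (img[:, 0::2] + img[:, 1::2]) / 2.0   # horizontal averaging
    ll = (rows[0::2, :] + rows[1::2, :]) / 2.0   # vertical averaging
    return ll                                    # quarter-size approximation

patch = np.random.default_rng(2).random((64, 64))  # stand-in rock image patch
feature = haar2d_lowpass(patch).ravel()            # 1024-value signature
print(feature.shape)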

Single Layer Neural Network Solution for XOR Problem

An interactive Neural Network development Tool Box is designed using C++. Neural Networks are an emerging computational technique for artificial intelligence applications, and are used for speech and image recognition, feature extraction, and associative memory. For the simulation of Neural Networks, object-oriented programming (OOP) is found to be the most suitable paradigm. The paper describes the Neural Network optimization algorithms used in this implementation, illustrated with results. (Year:2000)

A novel algorithm for simulation of a (Two-legged) Robot using Neural Network

Multi-layered neural networks are being applied in various fields of automation. One such application is developed for the simulation of a two-legged robot that balances and walks using a neural-network-based learning technique. Human limbs are simulated and a skeleton of the human body is designed using computer graphics. A rule-based neural network algorithm is developed to drive the skeleton to walk. The motivation is created using a set of rules and error functions to achieve set goals, such as keeping the centre of gravity (CG) over the feet, keeping the average height of the CG constant, spending minimum energy, and travelling in a given direction. Using this algorithm, the seven limb joints of the skeleton are updated and the limbs are plotted iteratively to produce an animation of walking. The robot model is given only the destination coordinates; the actual limb movements are generated automatically, similar to human walking. The algorithm uses a technique similar to the gradient descent method to minimize the error. Such an algorithm may be applied to solve transportation problems in non-uniform space using multi-legged (wheel-less) vehicles. (Year:1996)
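
The goal-driven minimisation can be sketched as follows; the error terms and the toy joint model are invented for illustration and are not the paper's rule set:

import numpy as np

# Illustrative sketch: minimise a weighted walking-error function over
# seven joint angles by finite-difference gradient descent.
def walk_error(joints, target_x):
    cg_x = np.sum(np.cos(joints)) / joints.size     # toy centre of gravity
    cg_h = np.sum(np.sin(joints)) / joints.size     # toy CG height
    energy = np.sum(joints ** 2)                    # energy penalty
    return (cg_x - target_x) ** 2 + (cg_h - 0.5) ** 2 + 0.01 * energy

joints = np.zeros(7)
lr, eps = 0.1, 1e-5
for _ in range(500):
    grad = np.zeros(7)
    for i in range(7):                              # finite-difference gradient
        d = np.zeros(7); d[i] = eps
        grad[i] = (walk_error(joints + d, 1.0) - walk_error(joints - d, 1.0)) / (2 * eps)
    joints -= lr * grad                             # descend toward the goals
print(joints)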

A Learning Algorithm for Self Organized Multilayered Neural Network

Feed-forward multilayered neural networks are very popular for their generalization and feature extraction properties. However, a feed-forward network needs supervised learning to be trained initially. Self-organizing networks, on the other hand, are used in applications such as image classification, speech recognition and language translation. An algorithm is developed to train a multilayered feed-forward network in both supervised and unsupervised modes. Unsupervised learning is achieved by enhancing the maximum output and de-clustering the crowded classes simultaneously. The proposed algorithm also allows forcing any output to learn a desired class using supervised training such as Back Propagation. The network uses a multilayered architecture and hence classifies features, and the population of each class can be controlled with greater flexibility. (Year:1996)
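
One possible reading of the unsupervised rule is sketched below; the reinforcement rate and the de-clustering handicap are assumptions, not the paper's formulation:

import numpy as np

# Illustrative sketch: reinforce the winning output while handicapping
# outputs whose classes are already crowded (de-clustering).
rng = np.random.default_rng(3)
W = rng.normal(0, 0.1, (5, 10))            # 10 inputs -> 5 output classes
counts = np.zeros(5)                       # how often each class has won

for _ in range(1000):
    x = rng.random(10)
    y = W @ x
    bias = counts / (1.0 + counts.sum())   # crowded classes get a handicap
    winner = int(np.argmax(y - bias))
    W[winner] += 0.05 * (x - W[winner])    # Hebbian-style reinforcement
    counts[winner] += 1

print("class populations:", counts)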

A DSP based Low Precision Algorithm for Neural Network using Dynamic Neuron Activation Function

Most neural network applications are based on computer simulation of back propagation (BP). The BP algorithm uses floating-point arithmetic, which is computationally intensive and hence slow on small machines. An integer algorithm similar to BP is developed, which is suitable for DSP (Digital Signal Processor) devices and microcontrollers. The algorithm uses an integer neuron activation function with bounded output. Training the network involves modifying the activation function coefficients, which results in reliable error convergence with increased speed and fewer local minima problems. Using this algorithm, the error convergence rate is found to be better than the conventional BP algorithm for analog and digital problems. The algorithm is implemented on DSP-based neural hardware. (Year:1996)
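
A sketch of a bounded integer activation of the kind suited to fixed-point hardware (the Q8 scaling and piecewise-linear shape are assumptions):

# Illustrative sketch: a bounded, piecewise-linear integer activation in
# Q8 fixed point (the value 256 represents 1.0); no floating point needed.
def int_activation(x_q8, slope_q8=64):
    y = (x_q8 * slope_q8) >> 8          # integer multiply, then rescale
    return max(-256, min(256, y))       # clamp output to [-1.0, 1.0]

for x in (-2048, -256, 0, 256, 2048):
    print(x, "->", int_activation(x))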

A Neural Network Tool Box using C++

An interactive Neural Network development Tool Box is designed using C++. Neural Networks are an emerging computational technique for artificial intelligence applications, and are used for speech and image recognition, feature extraction, and associative memory. For the simulation of Neural Networks, object-oriented programming (OOP) is found to be the most suitable paradigm. The paper describes the Neural Network optimization algorithms used in this implementation, illustrated with results. (Year:1995)

A multilayered feed forward neural network suitable for VLSI implementation

A potentially simplified training strategy for feed-forward neural networks is developed with a view to VLSI implementation. The gradient descent back propagation technique is simplified to train stochastic neural hardware. The proposed learning algorithm uses ADD, SUBTRACT and LOGICAL operations only, which reduces circuit complexity and increases speed. The forward and reverse characteristics of the perceptrons are generated using random threshold logic. The proposed hardware consists of 31 perceptrons per layer working in parallel, with a programmable number of layers working in sequential mode. (Year:1995)

Application of artificial neural networks in hydrological modeling: a case study of runoff simulation of a Himalayan glacier basin

The simulation of runoff from a Himalayan glacier basin using an artificial neural network (ANN) is presented. The performance of the ANN model is found to be superior to the energy balance model and the multiple regression model. The ANN is faster in learning and exhibits excellent generalization characteristics. (Year:1993)

Antigen Prediction Using Neural Network Based On Tri-Peptide Markers

The protein sequence plays an important role in understanding the function and features of a protein. Antigen prediction from the huge amount of primary protein sequence data is a challenging problem. A novel approach is proposed here to characterize an antigen sequence using a set of features which describe the characteristics of the target antigen group. The proposed system uses a combination of an evolutionary algorithm and a proposed ordering algorithm to identify the feature set; here the features are tri-peptides. The algorithm ensures that a unique combination of tri-peptides separates the target antigen sequences from other protein sequences. We have preprocessed datasets of Plasmodium falciparum, Leptospira interrogans, Pseudomonas aeruginosa, Streptococcus pneumoniae and Bacillus thuringiensis for use in our system. These datasets are extracted from the UniRef100 protein sequence database, which contains 83 million records. Neural networks are trained using training sets from all species and the results are compared. Prediction gives a peak accuracy of 98% for P. falciparum using the identified tri-peptide features, tested on the test dataset. (Year:2017)
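
The tri-peptide feature extraction step can be sketched as follows (the sequence and markers below are toy stand-ins; the evolutionary/ordering selection stage is omitted):

from collections import Counter

# Illustrative sketch: count overlapping tri-peptides in a protein
# sequence; selected tri-peptide counts become the classifier features.
def tripeptide_counts(seq):
    return Counter(seq[i:i + 3] for i in range(len(seq) - 2))

seq = "MKVLLAGGHMKVA"                      # toy stand-in sequence
markers = ["MKV", "LLA", "GGH"]            # assumed selected markers
counts = tripeptide_counts(seq)
feature_vector = [counts[m] for m in markers]
print(feature_vector)                      # [2, 1, 1]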

Peptide Markers based Prediction of Antigen Sequence using Neural Network

Bioinformatics has witnessed considerable progress in recent years, yet the prediction of antigen sequences in a big data environment remains challenging. A novel approach is proposed here to generate and evaluate tri-peptide markers, where a combination of high-frequency tri-peptides can signify a characteristic of the target antigen sequence. A dataset of Plasmodium falciparum antigen sequences is extracted from the benchmark UniRef100 protein sequence database, and training and test sets are generated from it. A Genetic Algorithm (GA) is used to identify an optimal set of tri-peptide markers from the training set. Through successive GA generations, markers are evaluated using an approximate selection function. A total of 100 tri-peptides are identified using the GA and the remaining 150 are extracted by examining the fitness function using an iterative convergence algorithm. A back propagation neural network is trained to predict target antigen sequences using the selected tri-peptide markers. The algorithm is tested on a test set disjoint from the training set, and the prediction result obtained shows 93% accuracy. The algorithm can also be useful for synthesizing new sequences as possible drug antigens for a given target protein. (Year:2017)

Simulation and 3D Visualization of Complex Molecular Structure for Study of Protein and Nano Materials

Simulation and visualization of complex bio-molecules are gaining importance because the functional properties of such molecules depend strongly on their 3D structures. One of the challenges of computational biology is the prediction of protein structure from the amino acid sequence. Three-dimensional visualization of molecular structure is also an important research goal in nano engineering, and such visualization, simulation and animation of reaction dynamics are essential for modern chemistry. Force-field simulation of the large number of atoms in a complex molecular structure is computationally challenging; finding the equilibrium of the force fields, and the resulting dynamic structure of large atomic clusters that assemble into predictable molecules, is one of the major goals of computational biology. In this paper we describe a simple but efficient algorithm (MoliSim3D) to simulate and dynamically visualize in 3D the assembly of atoms in the presence of internal and external forces. It helps in monitoring and measuring different bond angles, dimensions and displacements of selected portions of the thousands of atoms forming the target molecule. Pattern recognition, pattern error detection and reaction control are advanced tools of virtual molecular assembly. The proposed system is intended primarily as an educational tool, and to help researchers understand complex molecular dynamics and functionality with a high degree of confidence in a simulated environment. (Year:2015)

Knowledge base and neural network approach for protein secondary structure prediction

Protein structure prediction is of great relevance given the abundant genomic and proteomic data generated by genome sequencing projects. Protein secondary structure prediction is addressed as a sub-task in determining protein tertiary structure and function. In this paper, a novel algorithm, KB-PROSSP-NN, is proposed for protein secondary structure prediction (PSSP); it combines a knowledge base with neural-network modelling of the exceptions to that knowledge base. The knowledge base is derived from a proteomic sequence-structure database and consists of statistics of association between 5-residue words and the corresponding secondary structure. The predictions obtained using the knowledge base are refined with a back-propagation neural network, which models the exceptions to the knowledge base. Q3 accuracies of 90% and 82% are achieved on the RS126 and CB396 test sets respectively, which suggests an improvement over existing state-of-the-art methods. (Year:2014)
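
The knowledge-base construction might be sketched like this (toy data; the scoring details and the neural refinement of exceptions are omitted):

from collections import defaultdict, Counter

# Illustrative sketch: associate 5-residue words with the secondary
# structure (H/E/C) of their centre residue, then predict by lookup.
def build_kb(pairs):                       # pairs of (sequence, structure)
    kb = defaultdict(Counter)
    for seq, ss in pairs:
        for i in range(len(seq) - 4):
            kb[seq[i:i + 5]][ss[i + 2]] += 1
    return kb

train = [("MKVLLAGGH", "CHHHHHCCC")]       # toy stand-in pair
kb = build_kb(train)
word = "KVLLA"
print(word, "->", kb[word].most_common(1)) # most frequent structure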

Extracting database properties for sequence alignment and secondary structure prediction

A plethora of continuously increasing data exists in the genomic and proteomic domains, and computational tools are of vital importance for research in these areas. Biologists who are identifying new sequences or genes would like to compare their findings with the existing data sets locally. In this paper, we present a set of utilities that help researchers conveniently extract the fields of interest from public protein databases. UniRef100 is a large, comprehensive set of unique, non-redundant protein sequences. The utilities described are used to index, sort, randomly access the records of, and extract the properties of the UniRef100 database. The derived properties are used for creating a synthetic bio-random database for further research in sequence analysis and secondary structure prediction. (Year:2014)

Keyword based Iterative Approach to Multiple Sequence Alignment

In many research applications, a large number of similar-looking peptide sequences needs to be analyzed to study small differences, using a visual alignment technique called Multiple Sequence Alignment. For a better understanding of proteins and their functions, it is necessary to align the strong bonds of each sequence and observe the changes in the weak bonds. Multiple sequence alignment identifies and quantifies similarities and differences among several proteins visually or graphically. The dissimilarities among multiple sequences can be due to evolutionary processes such as mutation, insertion or deletion of amino acid residues. Most multiple sequence alignment techniques use pairwise alignment, which is time consuming and computationally intensive. The performance of the keyword-based iterative algorithm presented here is found to be more efficient than recently reported techniques. (Year:2014)

Protein Sequence Similarity Search Suitable for Parallel Implementation

Having entered the post-genomic era, there lies a plethora of information, both genomic and proteomic. This provides ample resources for computational and machine learning strategies to be applied to problems of biological relevance. Searching biological databases for similar or homologous sequences is a fundamental step in many bioinformatics tasks. On discovery of a new protein sequence or drug, a biologist would like to confirm the discovery by comparing it with the largest available protein database. Alignment-based methods become too complex and time consuming as the number of sequences increases, so alignment-free sequence comparison is often used as a filtering step before alignment is applied. A novel method for searching for similar sequences in a huge protein database is proposed. The method has two interesting aspects. The first is a divide-and-conquer approach with a hashing-like scheme for indexing the large database; the index consists of the addresses of the 15-residue words in the UniRef100.fasta database. The second is the possibility of data parallelism, as the database is divided into m segments for indexing, which can further increase the efficiency of the algorithm. Creating the index is time consuming, but the search time is constant and affordable. The method is particularly useful with large databases like UniRef100.fasta, which consists of 9,757,328 protein sequences as of May 2010. The index-based searching algorithm is implemented in C# .NET. (Year:2012)
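
A sketch of the word index (an in-memory hash map stands in for the paper's disk-based index; for data parallelism, the database could be split into m segments, each indexed independently):

from collections import defaultdict

# Illustrative sketch: index every 15-residue word by (sequence id,
# offset); lookups then run in near-constant time per word.
def build_index(sequences, k=15):
    index = defaultdict(list)
    for sid, seq in enumerate(sequences):
        for off in range(len(seq) - k + 1):
            index[seq[off:off + k]].append((sid, off))
    return index

db = ["MKVLLAGGHMKVAARNDCEQ", "GGHMKVAARNDCEQMKVLLA"]  # toy database
idx = build_index(db)
print(idx["KVLLAGGHMKVAARN"])   # hits: [(0, 1)]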

Similarity search using pre-search in UniRef100 database

Sequence similarity search in biological databases is used to characterize a newly discovered protein and to confirm the existence of its homologs, and is often computationally very expensive. We have implemented a new algorithm that performs sequence similarity search using a pre-search phase. The proposed algorithm works in three phases. As preparation for the pre-search, we locate a sequence similar to the query sequence and extract all words common to the two. In the second phase, the pre-search phase, we locate all sequences containing any of the randomly chosen common words. In the third phase, this list is scanned and the results from the second phase are refined using the Similarity Search (SS) algorithm described in the paper. We have preprocessed the UniRef100.FASTA protein database, containing 9,757,328 records downloaded from uniprot.org, to suit our application of sequence similarity search. The algorithm is simple and can be applied in various settings, including searching in DNA and protein sequence databases, motif finding, and gene identification. Pre-search reduces the search space using a much faster, simpler algorithm; in large database searches, its effect can be substantial. (Year:2011)

Utilities for Efficient Usage of Large Biological Database

We have been witnessing a rapid expansion in the number of biological databases as an outcome of the human genome-sequencing project. These biological databases are created and updated as biologists discover new molecules. Most of these databases are either non-structured or semi-structured; the data are stored in flat files, which makes it difficult to retrieve a particular record in reasonable time. Computational biology tasks such as multiple sequence alignment, sequence similarity, motif finding, and structure prediction have attracted many researchers. Computational biologists are often not interested in all the fields present in a database; rather, they are concerned with particular fields depending upon the issue being addressed. We have developed utilities to extract and index the UniRef100 database for fast sequential and indexed random access, to normalize the occurrences of pair, trio and quad substrings of amino acids in the database, and to create a programmatically mutated database for testing sequence similarity algorithms. This work should aid upcoming researchers in computational biology in customizing an existing database to their algorithmic needs and thereby accelerate their work.
Index Terms: UniRef100 protein database, customized database, substring frequency, sequence similarity, structure prediction. (Year:2010)

Universal Share for Multisecret Image Sharing Scheme Based on Boolean Operation

Conventional secret sharing schemes require complex computation. A visual secret sharing (VSS) scheme encrypts a secret into two or more meaningless images, called shares; the shares are stacked together to decrypt the secret using the human visual system. In recent years, the concept of a universal share has been introduced to share multiple gray images, where a company organizer uses this unique share to recover multiple images. However, the use of complex numerical computation makes such systems inappropriate for VSS. In this letter, we overcome this complexity issue and propose a Boolean-based lightweight computation scheme to share multiple secrets using a universal share. The proposed Boolean-based multi-secret sharing (MSS) scheme encodes n secret images into a universal share and n meaningless shares, and reconstructs the secret images losslessly. To provide threshold security, the proposed scheme uses a random universal share generating function to derive a distinct universal share from the base universal share for each secret image. Moreover, to enhance the limited sharing capacity of the Boolean-based MSS scheme, we propose a modified Boolean-based MSS scheme. (Year:2016)
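
The Boolean idea reduces to XOR masking, sketched below for a single secret; this is a generic XOR share construction, not the paper's exact universal-share scheme:

import numpy as np

# Illustrative sketch: XOR-based sharing of one secret image. A random
# share R and the masked share S ^ R together recover S losslessly.
rng = np.random.default_rng(4)
secret = rng.integers(0, 256, (4, 4), dtype=np.uint8)            # toy image

universal = rng.integers(0, 256, secret.shape, dtype=np.uint8)   # random share
masked = secret ^ universal                                      # meaningless share

recovered = masked ^ universal                                   # stack via XOR
assert np.array_equal(recovered, secret)                         # lossless
print("recovered OK")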

Enhanced Contrast of Reconstructed Image for Image Secret Sharing Scheme Using Mathematical Morphology

Visual secret sharing (VSS) is a cryptographic technique of the image secret sharing scheme (ISSS) family that encodes a secret message image (text or picture) into noise-like black-and-white images, called shares. The shares are stacked together and the secret message image is decoded using the human visual system. One of the major drawbacks of this scheme is the poor contrast of the recovered image, which can be improved if a computational device is available during decoding. In this paper, we propose to improve the poor contrast of classical VSS schemes for text or alphanumeric secret messages and low-entropy images. Initially, the stacked image is binarized using a dynamic threshold value; a mathematical morphological operation is then applied to the stacked image to enhance the contrast of the reconstructed image. Moreover, a method is proposed that allows the size of the structuring element to change according to the contrast and the size of the stacked image. We perform experiments for different types of VSS schemes, different share patterns, different share shapes (rectangle and circle), and low-entropy images. Experimental results demonstrate the efficacy of the proposed scheme. (Year:2015)
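
The post-processing steps can be sketched as follows (the mean-based threshold rule and the 3x3 structuring element are assumptions):

import numpy as np
from scipy import ndimage

# Illustrative sketch: binarise a stacked VSS image with a dynamic
# (mean-based) threshold, then clean it with morphological closing.
rng = np.random.default_rng(5)
stacked = rng.random((64, 64))            # stand-in noisy stacked image

threshold = stacked.mean()                # dynamic threshold (assumed rule)
binary = stacked > threshold

structure = np.ones((3, 3), dtype=bool)   # structuring element (assumed size)
cleaned = ndimage.binary_closing(binary, structure=structure)
print("foreground pixels:", int(cleaned.sum()))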

Automatic Registration, Integration and Enhancement of India's Chandrayaan-1 Images with NASA's LRO Maps

Chandrayaan-1 was India's first deep-space exploration mission to the moon. Its Terrain Mapping Camera (TMC) imaged about 50% of the total lunar surface in its limited lifetime and covered the polar areas almost completely at high resolutions of 5 m/pixel and 10 m/pixel. This image dataset has been processed and placed in the public domain as individual image strips categorized by orbit. The authors have already developed a lunar GIS including a set of utilities, such as 3-D visualization and exploration, and crater detection and search, using datasets from the Wide Angle Camera (WAC) of NASA's Lunar Reconnaissance Orbiter, which are of lower resolution than Chandrayaan-1. The objective of this paper is to normalize and register the Chandrayaan-1 images to the existing processed data so that all these utilities can be transparently applied to the high-resolution Chandrayaan-1 datasets. The registration process consists of identifying features in the source and target images and estimating appropriate corrections for the offset, rotation and scaling parameters. Furthermore, due to the low-altitude orbit of the satellite, the acquired images have pixel displacements from the true nadir position, which need non-linear correction. This paper describes a step-by-step technique to integrate these high- and low-resolution images in a single framework. (Year:2015)
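
The offset/rotation/scale estimation from matched feature points can be sketched as a least-squares similarity fit; the points below are synthetic stand-ins:

import numpy as np

# Illustrative sketch: estimate scale, rotation and offset between two
# images from matched feature points, using complex least squares
# (q ~ a*p + b, where a encodes scale*rotation and b the offset).
src = np.array([0 + 0j, 10 + 0j, 10 + 10j, 0 + 10j])      # source features
a_true, b_true = 1.5 * np.exp(1j * 0.2), 3 + 4j
dst = a_true * src + b_true                               # matched targets

A = np.column_stack([src, np.ones_like(src)])
(a, b), *_ = np.linalg.lstsq(A, dst, rcond=None)
print("scale:", abs(a), "rotation (rad):", np.angle(a), "offset:", b)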

Hiding secret message using visual cryptography in steganography

This paper presents two-layered security for data hiding by combining steganography and visual cryptography (VC). Classically, VC encrypts a secret image into noise-like images called shares and decrypts the secret message by stacking the shares, whereas steganography hides a secret inside another image, called the cover image, such that only the intended receiver decodes the message. Steganography often encodes the secret message with a secret key before hiding it in another image. In this paper, the cover message and the encrypted secret message are encoded into noise-like shares using (2, 2) VC, where the digital invisible ink concept from steganography is incorporated into VC (DIIVC) to hide the secret message. Unlike typical steganography, the shares are modified to conceal the secret message instead of the cover image. At the receiver, decryption of the shares using conventional VC yields a poor-contrast cover image. On the surface, this appears to be the sole secret disclosed by VC, whereas only the intended receiver knows of the hidden message. The intended receiver then retrieves the encrypted secret message by applying the proposed DIIVC algorithm to the resulting cover image, and finally reveals the original secret message using the secret key. (Year:2015)

Reconstruct 3D Human Face Using Two Orthogonal Images

3D human face reconstruction has become very popular in recent times and attracts many researchers. The construction of a 3D human face using only two orthogonal images and twelve landmark features is the main contribution of the proposed approach. For 3D object modeling, the Open Graphics Library (OpenGL) is used as the platform through which modeling, modification and rendering are performed on the morphable model. The proposed approach proceeds through semi-automatic identification of the facial landmark features, calculation of the 3D coordinates of the human face, morphable model construction in OpenGL, reshaping of the morphable model, and rendering. The facial landmark identification is semi-automatic, as the module requires manual interaction to mark the facial landmarks on the image. Reshaping of the morphable model is required because the morphable model does not fit the actual face in most cases; the model is reshaped by calculating the root mean square (RMS) error of the face coordinates. The rendering process does not require a wide-screen image, because the approach renders using the input front-face and side-face images as textures. Applications of this research help address problems in fields such as crime detection, 3D game characterization, ornament exhibition, and medical technology such as plastic surgery. (Year:2014)

3D Video Streaming for Virtual Exploration of Planet Surface

Human perception is far superior to vision analysis by computer. Visual objects such as a distant planet surface (terrain) are not accessible to an observer, but they can be presented to a human analyst in a natural representation, such as a virtual 3D anaglyph, for browsing and visual analysis. This paper presents the development of a stereoscopic 3D viewer application for planetary surfaces. For 3D stereo visualization of a virtual 3D geo-based data model, the anaglyph method is the most cost-effective compared to other methods such as paired epipolar images, multi-view displays and integral imaging displays. In this study, 3D anaglyph scenes are generated using Digital Elevation Map (DEM) and satellite imagery datasets. The objective is to use planetary optical remote sensing data available in public domain databases to create an application for studying the formation of specific planetary surfaces. The DirectX library is used to implement the 3D animated anaglyphs. 3D terrain generation and rendering is widely used in computer games, but here it is applied to scientific uses such as studies of relative and absolute surface age, analysis of planetary surfaces and their formation, analysis of specific craters, safe landing site identification, mineralogical mapping, and ejecta mapping. (Year:2014)
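
Red-cyan anaglyph composition from a stereo pair can be sketched as follows (random arrays stand in for the rendered DEM views):

import numpy as np

# Illustrative sketch: build a red-cyan anaglyph by taking the red
# channel from the left-eye view and green/blue from the right-eye view.
rng = np.random.default_rng(6)
left = rng.integers(0, 256, (64, 64, 3), dtype=np.uint8)   # left-eye render
right = rng.integers(0, 256, (64, 64, 3), dtype=np.uint8)  # right-eye render

anaglyph = right.copy()
anaglyph[..., 0] = left[..., 0]     # red from left, cyan (G, B) from right
print(anaglyph.shape)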

An Interactive Deblurring Technique for Motion Blur

An interactive deblurring technique to restore a motion-blurred image is proposed in this paper. A segment-based, semi-automated restoration method is developed using an error-gradient-descent iterative algorithm. In this approach, the segments that best represent the motion blur are detected automatically. The decimal parameters of the blur kernel are then derived interactively, with extended precision using interpolation between pixels and a comparatively much lower error convergence rate. Once the blur kernel is obtained, the image is restored using Stirling's interpolation formula. Experimental results show that the proposed method gives satisfactory restoration, as interactive judgment yields the most desirable quality. (Year:2012)

A skull/face superimposition using computer graphics

A computer graphic superimposition technique has been developed and employed to compare an antemortem photograph of a face with a recovered skull. A software package, named here the Computer Aided Superimposition Software (CASS-01), has been developed for this purpose. In this video digitization technique, photographs of the face and the skull image are superimposed on computer graphics by interactive processing. The distances between anatomical points on the face and skull are measured, and the ratios between selected distances are obtained on the computer monitor. The ratios of selected distances between anatomical points on the skull show good agreement with those of the face photograph, which strengthens the possibility that the skull belongs to the victim in the face photograph. The computer graphic superimposition technique developed here has been successfully tested in several case studies. (Year:2001)

Experimental characterization of Silicon Drift Detector for X-ray spectrometry: Comparison with theoretical estimation

The reverse saturation current and the ideality factor (η) are the main parameters that affect the performance of a semiconductor radiation detector in different space environmental conditions. We have measured both of these parameters for the Silicon Drift Detector (SDD) used as a radiation detector in X-ray spectrometry for space-borne applications, with active areas of 40 mm2 and 109 mm2 and 450 μm thick silicon. The measured reverse saturation current is compared with values estimated theoretically using the diode equation for various detector operating temperatures, and a strong dependence of the reverse saturation current on the ideality factor is shown. Subsequently, using the reverse saturation current ratio method, the slope ratio of the small-area to the large-area SDD is derived and compared with the theoretical slope ratio obtained using the measured ideality factor. It is shown that the slope ratios closely match the form of the diode equation that has the ideality factor in both the product and exponential terms for these SDDs. The measured spectral energy resolution is ∼150 eV at 5.9 keV for both the small- and large-area SDDs when operated at −40 °C and −65 °C respectively. The noise performance of the spectrometer is also measured in terms of Equivalent Noise Charge (ENC) for various detector operating temperatures, and the ENC in rms noise electrons is found to be minimal for a pulse shaping time of 3.3 μs. (Year:2016)
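
For reference, the standard Shockley diode equation underlying the theoretical estimate can be evaluated as below; the I0 and η values here are placeholders, not the measured ones:

import numpy as np

# Shockley diode equation: I = I0 * (exp(qV / (eta*k*T)) - 1).
k = 1.380649e-23        # Boltzmann constant (J/K)
q = 1.602176634e-19     # elementary charge (C)

def diode_current(V, I0, eta, T):
    return I0 * (np.exp(q * V / (eta * k * T)) - 1.0)

print(diode_current(V=0.4, I0=1e-12, eta=1.2, T=233.15))  # at -40 degC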

Radiation effects on Silicon Drift Detector based X-ray spectrometer on-board Chandrayaan-2 mission

The effects of space radiation damage on the Silicon Drift Detector (SDD) have been studied by measuring the leakage current and the energy resolution for various gamma (60Co) and X-ray (55Fe) doses. It is observed that there is no significant change in the leakage current or the energy resolution for gamma-ray doses up to 3 krad. The energy resolution degrades from ∼160 eV to ∼210 eV at 5.89 keV for a gamma-ray dose of ∼10 krad at a detector operating temperature of ∼ −40°C. This meets the requirement of the Chandrayaan-2 payload performance for the mission life of 2 years. Irradiation tests were also carried out using a 55Fe X-ray source for doses up to 64 krad, and no significant change in the leakage current or the energy resolution is observed. The radiation damage to the electronic components, such as the internal JFET (specifically its transconductance, gm), and the change in the total input capacitance are quantified by measuring the energy resolution for various pulse shaping time constants before and after irradiation. In this paper, we present a summary of the irradiation measurements and their effects on the SDD devices. (Year:2015)

A new technique for measuring the leakage current in Silicon Drift Detector based X-ray spectrometer—implications for on-board calibration

In this work, we report a new technique for measuring the leakage current in Silicon Drift Detectors (SDDs) and propose to use it as a tool for on-board estimation of the radiation damage to SDDs employed in space-borne X-ray spectrometers. The leakage current of a silicon-based detector varies with the detector operating temperature and increases with the radiation dose encountered in the space environment. The proposed technique involves measuring the reset frequency of the reset-type charge-sensitive pre-amplifier when the feedback capacitor is charged only by the detector leakage current. Using this technique, the leakage current is measured for large samples of SDDs with two different active areas, 40 mm2 and 109 mm2, and 450 micron thick silicon. These measurements are carried out in the temperature range of −50°C to 20°C. At each step, the energy resolution is measured for all SDDs using an Fe-55 X-ray source, and it is shown that the energy resolution varies systematically with the leakage current, irrespective of the differences among detectors of the same or different sizes. Thus, by measuring the leakage current on-board, it is possible to estimate the time-dependent performance degradation of the SDD-based X-ray spectrometer. This can be particularly useful where large numbers of SDDs are used. (Year:2015)
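
The measurement principle reduces to charge bookkeeping on the feedback capacitor; in the sketch below, the capacitance, voltage swing and reset rate are placeholder values:

# Illustrative sketch: with the input stage charged only by leakage, the
# leakage current is I = C_feedback * V_reset_swing * f_reset.
def leakage_current(c_feedback_farad, v_swing_volt, f_reset_hz):
    return c_feedback_farad * v_swing_volt * f_reset_hz

# e.g. 50 fF feedback, 1 V swing, 20 resets per second -> 1 pA
print(leakage_current(50e-15, 1.0, 20.0))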

Smartphone-FPGA Based Balloon Payload Using COTS Components

This paper describes a low-cost architecture for a multi-sensor remote sensing balloon payload designed for prototyping a student micro-satellite payload project. Commercial Off The Shelf (COTS) components are used to implement the payload instrumentation; COTS components provide high processing performance, low power consumption, high reliability and low cost, and are easily available. The main architecture of the system consists of a commercially available smartphone, an FPGA (field-programmable gate array) and a microcontroller connected together along with sensors and telemetry systems. Presently available smartphones combine multiple advanced sensors, different communication channels, a powerful operating system, and multi-core processors with large non-volatile memory; they also support high-resolution imaging devices for remote sensing applications. The smartphone is interfaced to a microcontroller to expand its I/O for interfacing sensors and the FPGA. The FPGA supports high-speed on-board parallel processing needs and complex controls. Flexible configurations for the data acquisition system are provided using the built-in A/D converters, counters and timers available in the FPGA and microcontroller. The proposed system is an experimental balloon payload for monitoring atmospheric parameters such as temperature, humidity and air pollution; it can also monitor city traffic, agricultural fields and the city landscape for security and surveillance. (Year:2015)

Dependence of leakage current on the performance of Silicon Drift Detector based X-ray spectrometer

We have developed a Silicon Drift Detector (SDD) based X-ray spectrometer for future planetary/space exploration missions. The spectrometer provides an energy resolution of ∼150 eV at 5.9 keV for a pulse peaking time of 3 μs with the detector kept at −40°C. The energy resolution of the SDD-based X-ray spectrometer depends on the detector leakage current and the electronics noise associated with the signal readout and processing electronics. We have measured the energy resolution and leakage current for two sets of SDDs having active areas of 40 mm2 and 109 mm2 respectively. It is shown that the leakage current for the small-area (40 mm2) SDD varies from ∼0.6 nA at 20°C to ∼0.2 pA at −40°C, and for the large-area (109 mm2) SDD from ∼0.9 nA at 20°C to ∼1 pA at −50°C. The total measured Equivalent Noise Charge (ENC) of the spectrometer system varies from ∼34.5 rms electrons at −3°C to ∼11 rms electrons at −40°C for the small-area detector, and from ∼42 rms electrons at −8°C to ∼13 rms electrons at −50°C for the large-area detector. (Year:2013)

3-Phase Power Factor Correction Using Vienna Rectifier Approach and Modular Construction for Improved Overall Performance Efficiency and Reliability

While applications of 1-phase PFC are now familiar and prevalent, the same is not the case with 3-phase PFC. Much equipment drawing kilowatts of power from the 3-phase mains should be a candidate for 3-phase power factor correction, because several advantages ensue, both to the user of the equipment and to the utility. The Vienna Rectifier approach to 3-phase power factor correction offers many advantages and convenient, user-friendly features compared to the two-level, six-switch boost PWM rectifier. Among them are: continuous sinusoidal input currents with unity power factor and extremely low distortion; no need for a neutral wire; reduction in voltage stress and switching losses of the power semiconductors by almost 40%; immunity to variation or unbalance in the mains 3-phase voltages or the absence of one phase; a wide mains voltage range (320 VAC to 575 VAC); very low conducted common-mode EMI/RFI; very high efficiency, of the order of 97.5%, for power levels of 10 kW and an input line voltage of 400 VAC; and short-circuit immunity to failure of the control circuit. The paper describes the Vienna Rectifier's power stage and control techniques, with particular emphasis on modular construction. What is proposed in this paper is a new approach that employs fuzzy logic to build the controller for Vienna Rectifier DCB modules for 3-phase AC to DC power conversion. (Year:2003)

Indian cosmic ray experiment ions (Anuradha) in space shuttle spacelab-3 using CR-39 detectors

An Indian experiment in Spacelab-3 has been designed to measure the ionization states, flux and energy spectrum of elements of Z = 2-26 in the anomalous component of cosmic radiation in the energy region 5-100 MeV/amu. In this experiment, we use thin CR-39 (DOP) sheets (thickness 250 μm) specially prepared by Pershore Moulding Ltd., England, using a 32-hour curing cycle and 1% dioctyl phthalate. Our study of the track response does not show any significant depth dependence or surface-to-surface variation for this detector. The calibration of the detector with different accelerated heavy-ion beams is presented in a separate paper in this conference. The alignment of the different sheets in the detector module is done using a 50 MeV α-beam from VECC, Calcutta, India. The detector module consists of two stacks: the bottom stack is rotated in discrete steps of 40 arcsec once every 10 s below the top stack, which is fixed to the main instrument body; this gives time information for each event. The threshold rigidity of a particle is calculated from the arrival time information, Spacelab data and trajectory calculations. The lower bound on the ionization state of a particle can be determined from the magnetic threshold rigidity and its total energy; the energy is determined by measuring the total range of the arriving particle in the bottom stack. The 45 kg instrument was successfully flown on NASA's Space Shuttle Spacelab-3 mission from April 29 to May 6, 1985, at an altitude of 352 km and an orbital inclination of 57°. (Year:2002)

Ionization states of cosmic rays: Anuradha (IONS) experiment in Spacelab-3

Measurements of the ionization states, composition, energy spectra and spatial distribution of heavy ions from helium to iron with energies of 10-100 MeV/amu in the anomalous cosmic rays are of major importance in understanding their origin, which is unknown at present. The Anuradha (IONS) cosmic ray experiment in Spacelab-3 was designed to determine these properties in near-earth space, and had a highly successful flight and operations aboard the shuttle Challenger at an orbital altitude of 352 km during 29 April to 6 May 1985. The instrument employs solid state nuclear track detectors (CR-39) of high sensitivity with a large collecting area of about 800 cm2, and determines the arrival time information of particles using active elements. Experimental methods, flight operations and preliminary results are briefly described. Initial results indicate relatively high fluxes of low-energy cosmic ray α-particles, oxygen-group and heavier ions. The flight period corresponded to a quiet Sun, with the level of solar activity close to solar minimum. It is estimated that about 10,000 events of low-energy cosmic ray alpha particles with time annotation are recorded in the detector, together with a similar number of events of oxygen and heavier ions of low-energy cosmic rays. (Year:1986)

A microcomputer system for mass spectrometer control and data acquisition

A microcomputer system has been designed for the semi-automatic operation of a solid-source mass spectrometer used for geochronological studies. It sequentially steps the magnetic field through pre-selected values, reads the digitized ion currents for a given time, and temporarily stores the data, which can be transferred to paper tape or directly to a desktop calculator for further analysis. The unit is relatively inexpensive, made of readily available components, and can be adapted to many laboratory automation tasks. (Year:1984)

An automatic linear temperature programmer

A linear temperature programmer is fabricated based on the design described by Mills et al. (1977). In this note the authors describe a few simple additions to the original design which make the system capable of temperature auto-hold and temperature auto-cut-off operations. A few minor refinements to the original design are also described. The system has been operated successfully for one year. (Year:1980)

Design of Microcomputer-Based System for on-Line PCM Data Acquisition and Monitoring

To increase the flexibility of a PCM decoding system, a microcomputer can play a vital role. A single microcomputer card with one or two peripheral devices replaces a large number of conventional hardwired circuits with increased reliability. This paper deals with various hardware and software aspects of using a microcomputer system for PCM decoding, recording, monitoring, feedback control and bit-error-rate counting. Although such a system has severe speed limitations, it can still be very useful for many satellite data acquisition systems, and for communication and scientific applications. (Year:1978)

A Digital Pressure Transducer

The design concept and capabilities of a rocket-borne digital pressure gage capable of measuring atmospheric pressures from 1 atm to 0.5 mm Hg are described. The gage is of the type devised by Vanderschmidt (1959) and Cambou et al. (1964). Particular attention is given to the choice of instrumentation, preflight calibration, and flight results. It is shown that the performance of the digital pressure gage compares well with that of conventional meteorological pressure gages. A major advantage of this pressure transducer is that its output is digital and hence easy to transmit and count; it also has a wide dynamic range of pressure measurements. The range of the instrument can be extended to 0.01 mm Hg by connecting two ionization chambers in parallel. (Year:1975)

Prediction of Bank Investors using Neural Network in Direct Marketing

Direct marketing in banking is one of the most effective methods of identifying potential investors. The effectiveness of direct marketing has been analyzed using different methods such as feature correlation, dataset balancing and neural networks (NN). Usually sixteen to twenty parameters are collected in the training database to evaluate a potential client. A fully connected multilayer NN is developed that gradually optimizes its connections based on the training dataset. This NN is used to predict a customer's willingness to make a long-term deposit, with Accuracy, Sensitivity and Specificity of 95.19%, 92.32% and 95.42% respectively. One important parameter is the false negative rate, which is 0.63% at the above accuracy; false negatives correspond to incorrectly predicting clients as unwilling. With our algorithm, analyzing the UCI benchmark test dataset gives 276 true predictions out of 451 records of customers who bought the bank product, and only 23 false predictions out of 3668 records of customers who did not buy the bank product. It may be noted that the false negative to true negative ratio increases rapidly with a small decrease in accuracy: a 2% decrease from 95% increases the false negative count from 23 to 379. Such an increase leads to a several-fold non-productive persuasion effort. On the other hand, a decrease in true positives reduces the number of true buyers identified, but does not reduce productivity through false predictions. However, it is seen that increasing the network size does not increase the accuracy even after several hours of training; hence an optimum network size needs to be found with automatic iterative pruning. (Year:2018)
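
For reference, the quoted Accuracy, Sensitivity and Specificity follow the standard confusion-matrix definitions, sketched below with placeholder counts (not the paper's):

# Illustrative sketch: standard confusion-matrix metrics used when
# evaluating such a classifier (counts here are placeholders).
def metrics(tp, fn, fp, tn):
    accuracy = (tp + tn) / (tp + fn + fp + tn)
    sensitivity = tp / (tp + fn)          # true positive rate
    specificity = tn / (tn + fp)          # true negative rate
    return accuracy, sensitivity, specificity

print(metrics(tp=90, fn=10, fp=5, tn=95))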
