Kostantin P. Nikolic
This paper presents the application of stochastic search algorithms to the training of artificial neural networks. The methodology was developed primarily for training complex recurrent neural networks, which are known to be harder to train than feedforward networks. Signal propagation from input to output is realized by simulating the recurrent network, and training is achieved by a stochastic search in the parameter space. The performance of this type of algorithm is superior to that of most training algorithms based on the gradient concept. The efficiency of these algorithms is demonstrated by training networks built from units characterized by long-term and long short-term memory. The presented methodology is effective and relatively simple.
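The evaluate-perturb-keep loop behind such training can be sketched as a simple random-search hill climb; the setup below (XOR data, a 2-2-1 tanh feedforward net, Gaussian perturbations) is an illustrative assumption, not the paper's exact algorithm.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy dataset: learn XOR with a tiny 2-2-1 feedforward network.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([0.0, 1.0, 1.0, 0.0])

def forward(params, X):
    W1, b1, W2, b2 = params
    h = np.tanh(X @ W1 + b1)
    return np.tanh(h @ W2 + b2).ravel()

def loss(params):
    return float(np.mean((forward(params, X) - y) ** 2))

params = [rng.normal(0, 0.5, (2, 2)), np.zeros(2),
          rng.normal(0, 0.5, (2, 1)), np.zeros(1)]
initial = best = loss(params)
for _ in range(5000):
    # Random perturbation of every parameter; keep only improving moves.
    trial = [p + rng.normal(0, 0.1, p.shape) for p in params]
    trial_loss = loss(trial)
    if trial_loss < best:
        params, best = trial, trial_loss
print(best <= initial)  # the search never accepts a worse parameter set
```

Because only improving perturbations are accepted, the loss is monotonically non-increasing; gradient information is never needed, which is what makes the approach attractive for recurrent architectures.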
M. A. Lone, S. A. Mir, Imran Khan and M. S. Wani
This article deals with the problem of finding an optimal allocation of sample sizes in a stratified sampling design so as to minimize the cost function. The iterative procedure of Rosen's gradient projection method is used to solve the resulting nonlinear programming problem (NLPP); when a non-integer solution is obtained after solving the NLPP, the Branch and Bound method provides an integer solution.
M. A. Lone, S. A. Mir and M. S. Wani
In this paper, we investigate a transportation problem, a special kind of linear programming problem in which profits, supplies and demands are considered as intuitionistic triangular fuzzy numbers. The crisp values of these intuitionistic triangular fuzzy numbers are obtained by defuzzifying them, and the problem is formulated as a linear programming problem. The solution of the formulated problem is obtained through the LINGO software. If the obtained solution is non-integer, the Branch and Bound method can be used to obtain an integer solution.
Mamta Padole1 and Pratik Kanani2
Encryption by itself does not prevent interception, but it denies the message content to the interceptor. For technical reasons, an encryption scheme usually uses a pseudo-random encryption key generated by an algorithm. It is in principle possible to decrypt the message without possessing the key but, for a well-designed encryption scheme, large computational resources and skills are required. An authorized recipient can easily decrypt the message with the key provided by the originator to recipients, but not to unauthorized interceptors.
Data hiding is the art of hiding messages in such a way that only the sender and the receiver of the message know that a message has been hidden. In the context of secured information transmission and reception, efficient techniques for data encryption and decryption are essential. In this paper, a message transfer application is developed which focuses mainly on secret message transfer between two parties. The scheme covers not only text but also the encryption of images and videos. A unique algorithm, i-Se4GE, is used for the transfer of the keys required for the encryption and decryption of messages. This algorithm makes use of a set of random numbers, the timestamps of the two users, a set of public keys and dynamic keys. It uses a two-step authentication request-reply process, which provides a double layer of security.
Mamta Padole1, Pratik Kanani2, Leena Raut1, Dhyanvi Jhaveri1 and Manali Nagda1
The current version of the Internet Protocol (IPv4) has not been substantially changed in the past 25 years. IPv4 has proven to be robust and easily implemented. In the early stage, the deployment of IPv6 is prepared and begun on the IPv4-based network. In the intermediate stage, IPv4 and IPv6 coexist. In the later stage, IPv6 plays the leading role and the IPv4 network gradually withdraws from the market. Meanwhile, researchers have put forward many transition mechanisms for different network infrastructures and different evolution stages. In this paper, a detailed study is made of IPv4 along with its different smart saving techniques, which help delay the shift from IPv4 to IPv6. Different addressing schemes that will remain unchanged in the future are also discussed. The limitations of IPv4 are also examined, so that the present IPv4 network infrastructure can be better secured until IPv6 is realized. Further, the need for IPv6 is discussed along with its header and address formats.
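One widely cited IPv4 address-saving technique (assumed here to be among the ones the paper surveys) is CIDR subnetting, which lets a block be split into right-sized subnets instead of wasting a full classful network. A minimal sketch with Python's standard `ipaddress` module:

```python
import ipaddress

# Split a /24 block (256 addresses) into four /26 subnets of 64 addresses each,
# so separate departments can be numbered without wasting a classful network.
block = ipaddress.ip_network("192.168.0.0/24")
subnets = list(block.subnets(new_prefix=26))
for s in subnets:
    # Two addresses per subnet (network and broadcast) are not host-assignable.
    print(s, "usable hosts:", s.num_addresses - 2)
```

Running this prints the four subnets `192.168.0.0/26` through `192.168.0.192/26`, each with 62 usable hosts.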
The communications sector is undergoing significant changes, with the emergence of a number of platforms available to provide a different range of services. Some of these platforms are complementary to each other, while others are competitive, or can provide a valid substitute for some of the services provided. Up till now, the most important communications platform in most of the developing countries has been the public switched telecommunication network (PSTN) which provides access to all households and buildings. This universality in providing access has also meant that the network has generally been designated as one for universal service.
This paper reports a study conducted in 2015 at different institutions of Jammu and Kashmir, aimed at obtaining a much clearer picture of the use and knowledge of e-learning at the institutions, so as to give new ideas for the development of e-learning (online courses) usage in institutions. Multiple sources of data were collected, including questionnaires and interviews with academics, students and administration. The information illustrates that e-learning (online courses) in the institutions of Jammu and Kashmir is still largely in the “innovators” and “early adopters” stages. There lies a “chasm” ahead inhibiting movement further into the “mainstream” area. The analysis of the information revealed how much the academics and students know about e-learning (online courses) and how much they implement it in their lectures and studies. The focus is on how one can strengthen this alignment to be able to bridge the chasm. The study has been successful in eliciting institutional support for changes to the e-learning support system.
Ronak Vihol1*, Hiren Patel2 and Nimisha Patel1,3
Offering “computing as a utility” on a pay-per-use plan, Cloud computing has emerged over the last few years as a technology of ease and flexibility for thousands of users. Distribution of dynamic workload among available servers and efficient utilization of existing datacenter resources are among the major concerns in Cloud computing. Load balancing needs to take server utilization into consideration: the resultant utilization should not exceed preset upper limits, to avoid service level agreement (SLA) violations, and should not fall beneath stipulated lower limits, to avoid keeping servers needlessly active. Scheduling of workload is regarded as an optimization problem that considers many varying criteria, such as the dynamic environment, the priority of incoming applications and their deadlines, to improve resource utilization and the overall performance of Cloud computing. In this work, a novel Genetic Algorithm (GA) based load balancing mechanism is proposed. Though not done in this work, in future we aim to compare the performance of the proposed algorithm with existing mechanisms such as first come first serve (FCFS), Round Robin (RR) and other search algorithms through simulations.
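A GA for load balancing can be sketched as follows; the chromosome encoding (one server index per task), the fitness (load spread across servers) and the operators are illustrative assumptions, not the paper's exact mechanism.

```python
import random

random.seed(1)
TASKS = [4, 8, 2, 7, 5, 1, 9, 3]   # assumed task sizes
SERVERS = 3

def fitness(chrom):
    # Smaller spread between the busiest and idlest server = better balance.
    load = [0] * SERVERS
    for task, srv in zip(TASKS, chrom):
        load[srv] += task
    return max(load) - min(load)

def mutate(chrom):
    c = chrom[:]
    c[random.randrange(len(c))] = random.randrange(SERVERS)
    return c

def crossover(a, b):
    cut = random.randrange(1, len(a))
    return a[:cut] + b[cut:]

pop = [[random.randrange(SERVERS) for _ in TASKS] for _ in range(30)]
initial_best = min(fitness(c) for c in pop)
for _ in range(100):
    pop.sort(key=fitness)
    elite = pop[:10]                # elitism: keep the best third
    pop = elite + [mutate(crossover(random.choice(elite), random.choice(elite)))
                   for _ in range(20)]
best = min(pop, key=fitness)
print("load spread:", fitness(best))
```

Because the elite survive each generation unchanged, the best fitness can never worsen; real schedulers would fold SLA limits and deadlines into the fitness function.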
In the field of image mosaicing, much research has been done to address the two major challenges of time complexity and quality improvement. The proposed method is a pre-processing step carried out before the actual image stitching. The method aims to find the overlapping regions in two images, so that features can be extracted from these overlapping regions rather than from the whole images, which reduces computation time. To detect the overlapping portion, a gradient-based edge extraction method and invariant moments are used. In the deduced region, SIFT features are extracted to determine the matching features. The registration process is carried out by the RANSAC algorithm, and the final output mosaic is obtained by warping the images. An optimized approach to calculating the moment difference values is presented to improve time efficiency and quality.
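The gradient-based edge extraction step can be illustrated with a plain Sobel operator (the paper's exact operator and thresholds are not specified here; this is a minimal NumPy sketch):

```python
import numpy as np

def sobel_edges(img):
    # Horizontal and vertical Sobel kernels.
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
    ky = kx.T
    h, w = img.shape
    mag = np.zeros((h, w))
    for i in range(1, h - 1):
        for j in range(1, w - 1):
            patch = img[i - 1:i + 2, j - 1:j + 2]
            gx = float((patch * kx).sum())
            gy = float((patch * ky).sum())
            mag[i, j] = (gx ** 2 + gy ** 2) ** 0.5
    return mag

# A vertical step edge: gradient magnitude peaks along the boundary columns.
img = np.zeros((8, 8))
img[:, 4:] = 1.0
edges = sobel_edges(img)
print(edges[4, 3], edges[4, 1])  # strong response at the edge, zero in flat areas
```

The resulting edge map is what a pre-processing stage could threshold to delimit candidate overlapping regions before running SIFT only inside them.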
Rajan Patel1*, Sumaiya Vhora2 and Nimisha Patel1
A packet drop (grayhole/blackhole) attack occurs at the network layer to discard packets in a MANET. It is essential to detect and prevent this attack in order to improve network performance. This article provides packet drop attack detection and prevention using RBDR (Rank Based Data Routing) for the AOMDV routing protocol. The fields of RBDR are generated from routing information and an analysis of network behavior, in order to detect malicious paths. The scheme identifies the malicious paths to prevent the packet drop attack, and is also able to find trusted, multiple disjoint, loop-free routes for data delivery in a MANET. The simulation is conducted in NS2 using the AOMDV reactive routing protocol and analyzed with respect to packet loss, average end-to-end delay and packet delivery ratio. The proposed technique can reduce the effect of the packet drop attack.
A MANET is a self-organizing, decentralized and dynamic network in which participating nodes can move anywhere; a node can act as a host or a router at any time. Because a mobile ad hoc network is decentralized, if a node that has been acting as a router for some time leaves the network, it becomes very difficult to transfer data packets. The self-organizing capability of nodes, the main feature of a MANET, is thus both an advantage and a disadvantage: it makes it easy to maintain the network and change its topology, but at the same time data transfer must tolerate such changes. MANETs are also used for large networks and the Internet, but they lack smart objects, as in the IoT, which can share information machine to machine. The number of Internet users accessing global information and technology is now increasing rapidly worldwide. The IoT is basically used to converge applications and services to open global business opportunities, which can use the I-GVC (Information-driven Global Value Chain) for efficient productivity.
P. Raikwal1*, V. Neema1 and A. Verma2
Memory has been facing several problems, of which leakage current is the most severe. Many techniques have been proposed for leakage control, such as power gating and ground gating. In this paper a new 8T SRAM cell, which adopts a single bit-line scheme, is proposed to limit the leakage current as well as to attain a high hold static noise margin. The proposed cell with low threshold voltage, high threshold voltage and dual threshold voltage is used to effectively reduce leakage current and delay. Additionally, a comparison has been performed between the conventional 6T SRAM cell and the new 8T SRAM cell. The proposed circuit consumes 671.22 pA of leakage current during the idle state, which is much less than the conventional 6T SRAM cell with sleep and hold transistors and with different β ratios. The proposed new 8T SRAM cell shows the highest noise immunity, 0.329 mV, during the hold state. Furthermore, the proposed new 8T SRAM circuit exhibits minimum read and write access delays of 114.13 ps and 38.56 ps respectively, as compared to the conventional 6T SRAM cell with different threshold voltages and β ratios.
Ashish Kumar Jain and Vrinda Tokekar
Mobile ad hoc networks (MANETs) possess self-configuration, self-control and self-maintenance capabilities. Nodes of a MANET are autonomous routers; hence they are vulnerable to security attacks. Collaborative attacks such as black hole and wormhole attacks in a MANET are difficult to detect and prevent. Trust-based routing decisions are an effective approach to security enhancement in MANETs. In this study, trust computing using a fuzzy max-product composition scheme is applied to compute aggregated trust values, in order to determine malicious nodes and thereby a safe route. The results show the performance improvement of the proposed protocol over the AODV protocol. Network metrics are analysed under different mobility conditions and different positions of black hole nodes.
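Max-product composition is a standard way to chain fuzzy trust relations; the matrices below (how much an observer trusts each neighbour's reports, and each neighbour's observed trust in candidate nodes) are made-up values, not the paper's data.

```python
import numpy as np

# R[i, k]: trust of observer i in neighbour k's reports.
R = np.array([[0.9, 0.4],
              [0.3, 0.8]])
# S[k, j]: neighbour k's observed trust in candidate node j.
S = np.array([[0.7, 0.2],
              [0.5, 0.9]])

# Max-product composition: (R o S)[i, j] = max_k R[i, k] * S[k, j].
T = np.max(R[:, :, None] * S[None, :, :], axis=1)
print(T)
```

Each aggregated value keeps the strongest evidence chain rather than averaging; a node whose aggregated trust falls below a threshold would be flagged as malicious and excluded from route selection.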
Nalin Kumar and M. Nachamai
Noise removal techniques have become an essential practice in medical imaging applications for the study of anatomical structure and the processing of MRI medical images. To address these issues, many de-noising algorithms have been developed, such as the Wiener filter, the Gaussian filter and the median filter. This research work uses only these three filters, which have already been successfully used in medical imaging. The noises that most commonly affect medical MRI images are salt-and-pepper, speckle, Gaussian and Poisson noise. The medical images taken for comparison include MRI images in grayscale and RGB. The performance of these algorithms is examined for various noise types: salt-and-pepper, Poisson, speckle, blurred and Gaussian noise. The algorithms are evaluated by measures of image file size, histogram and clarity scale. The experimental results suggest that the median filter performs better for removing salt-and-pepper and Poisson noise from grayscale images, the Wiener filter performs better for removing speckle and Gaussian noise, and the Gaussian filter for blurred noise.
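The median filter's strength against salt-and-pepper noise is easy to see in a minimal sketch (a 3×3 window on a toy image; window size and image values are assumptions for illustration):

```python
import numpy as np

def median_filter(img):
    # Replace each interior pixel by the median of its 3x3 neighbourhood.
    h, w = img.shape
    out = img.copy()
    for i in range(1, h - 1):
        for j in range(1, w - 1):
            out[i, j] = np.median(img[i - 1:i + 2, j - 1:j + 2])
    return out

# A flat grey image corrupted by one salt (255) and one pepper (0) impulse.
img = np.full((7, 7), 120.0)
img[3, 3] = 255.0   # salt
img[2, 2] = 0.0     # pepper
clean = median_filter(img)
print(clean[3, 3], clean[2, 2])  # both impulses replaced by the local median
```

Because an isolated impulse is an extreme value in its neighbourhood, the median discards it entirely, whereas a mean-based filter would merely smear it.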
K. Geetha1 and R. Vadivel2
The process of identifying the end points of the acoustic units of a speech signal is called speech segmentation. Speech recognition systems can be designed using sub-word units such as phonemes. A phoneme is the smallest unit of a language; it is context dependent, and its boundary is tedious to find. Automated phoneme segmentation has been carried out in research using short-term energy, convex hull, formants, the Spectral Transition Measure (STM), group delay functions, the Bayesian Information Criterion, etc. In this research work, the STM is used to find phoneme boundaries in Tamil speech utterances. A Tamil spoken word dataset was prepared, with 30 words uttered by 4 native speakers recorded with a high-quality microphone. The performance of the segmentation is analysed and results are presented.
Chittaranjan Mangale*, ShyamSundar Meena and Preetesh Purohit
The stock market is very versatile and fluctuates with time, which makes it difficult to predict the movement of a stock; there are various approaches and tools through which the price of a stock is determined from past patterns. These approaches are mostly either fundamental or technical; for long-term valuation, the fundamental approach is used. Every stock has its own value, the intrinsic value, which does not depend on the stock's price. The proposed model works through phases of data collection, feature processing, fuzzy logic mapping and stock value calculation. Fuzzy logic is used to map both quality and quantity valuation factors, and IF-THEN rules are applied to the linguistic variables. The fuzzy model outputs the stock value, which is used to provide the stock's worth. The stock value is calculated by the dividend discount model. The accuracy of the system is 0.77. The results offer a backbone for the value, not the price.
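The dividend discount model mentioned above, in its simplest (Gordon-growth) form, values a stock as next year's dividend divided by the gap between the required return and the perpetual growth rate; the numbers below are hypothetical.

```python
def dividend_discount_value(next_dividend, discount_rate, growth_rate):
    # Gordon growth model: V = D1 / (r - g), valid only when r > g.
    assert discount_rate > growth_rate, "model requires r > g"
    return next_dividend / (discount_rate - growth_rate)

# E.g. an expected dividend of 5.0, a 12% required return, 4% perpetual growth:
print(dividend_discount_value(5.0, 0.12, 0.04))  # intrinsic value of about 62.5
```

Comparing this intrinsic value with the market price is what lets the model say whether a stock is worth its price, which is exactly the value-versus-price distinction the abstract draws.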
Denver Braganza and B. Tulasi
The landscape of the Internet of Things (IoT) has been evolving at an increasing rate over recent years. With the ease of availability of mobile devices, there has been a tremendous leap in the technology associated with it; thus, the need for efficient intercommunication among these devices arises. Ensuring that IoT is seamlessly integrated into people's daily lives using appropriate technology is essential. One of the important technologies associated with IoT is RFID, which proves to be a simple and efficient technology for implementing IoT at various levels. Since IoT greatly impacts people's lives, one of its major concerns is security: IoT will have millions of devices and users connected to each other.
It is important to authenticate both users and devices to prevent any breach of information. With the limitations in RFID technology, various authentication protocols have been developed to provide optimal solutions.
Sumit Roy and R. Kavitha
Virtual reality is becoming a seamless technology which can be used to treat several psychological problems such as anxiety disorders. With the advancement of technology, virtual reality is becoming available to ordinary practitioners to carry out non-clinical therapies. An effective virtual reality system provides the user with total immersion, so that the user becomes a part of the virtual world. This study provides an insight into how virtual reality could provide a means to overpower anxiety disorders through a controlled environment projected to participants suffering from specific phobias.
V. B. Kumar Vatti1, Ramadevi Sri1 and M. S. Kumar Mylapalli2
In this paper, we suggest and discuss an iterative method for solving nonlinear equations of the type f(x) = 0 having eighteenth-order convergence. This new technique is based on Newton's method and the extrapolated Newton's method. The method is compared with existing ones through some numerical examples to exhibit its superiority.
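The base of such schemes is the classical Newton iteration x ← x − f(x)/f′(x); a minimal sketch on f(x) = x² − 2 is shown below (the eighteenth-order variant itself is not reproduced here).

```python
def newton(f, df, x0, tol=1e-12, max_iter=50):
    # Classical Newton's method: quadratic convergence near a simple root.
    x = x0
    for _ in range(max_iter):
        step = f(x) / df(x)
        x -= step
        if abs(step) < tol:
            break
    return x

root = newton(lambda x: x * x - 2, lambda x: 2 * x, 1.0)
print(root)  # converges to sqrt(2) = 1.41421356...
```

Higher-order methods like the one proposed combine several such evaluations per step to multiply the order of convergence, trading more function evaluations for far fewer iterations.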
AMS Subject Classification: 41A25, 65K05, 65H05.
Kamath Aashish and A. Vijayalakshmi
Face detection technologies are used in a large variety of applications, such as advertising, entertainment, video coding, digital cameras, CCTV surveillance and even military use. Detection is especially crucial in face recognition systems: faces that cannot be detected cannot be recognised. But a single face detection algorithm will not work the same way in every situation; it all comes down to how the algorithm works. For example, the Kanade-Lucas-Tomasi algorithm makes use of spatial intensity information to direct the search for the position that shows the best match, and it is much faster than traditional techniques because it checks far fewer potential matches between pictures. Similarly, the Viola-Jones algorithm, the most widely used face detection algorithm, is employed in most digital cameras and mobile phones to detect faces. It uses cascades to detect facial features such as the nose and the ears. However, if there is a group of people and their faces are close to each other, the algorithm might not work as well, since features tend to overlap in a crowd and individual faces may not be detected. Therefore, in this work, we test both the Viola-Jones and the Kanade-Lucas-Tomasi algorithms on each image to find out which algorithm works best in which scenario.
Neelam Dabas1, Rampal Singh2 and Vikash Chaudhary3
Modification of media and illegal reproduction are big problems nowadays because of the free availability of digital media, and protecting and securing digital data is a challenge. An Integer Wavelet Transform (IWT) domain based robust watermarking scheme with Singular Value Decomposition (SVD) and an Extreme Learning Machine (ELM) is proposed and tested on different images. In this scheme, a watermark or logo is embedded in the IWT domain as ownership information with SVD, and an ELM is trained to learn the relationship between the original coefficients and the watermarked ones. The trained ELM is used in the extraction process to extract the embedded logo from the image. Experimental results show that the proposed watermarking scheme is robust against various image attacks such as blurring, noise, cropping, rotation and sharpening. The performance of the proposed watermarking scheme is measured with the Peak Signal to Noise Ratio (PSNR) and the Bit Error Rate (BER).
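The two reported metrics have standard definitions, sketched here on toy data (the image and bit strings are made up for illustration):

```python
import numpy as np

def psnr(original, distorted, peak=255.0):
    # Peak Signal to Noise Ratio in dB: higher means less distortion.
    mse = np.mean((original - distorted) ** 2)
    return float("inf") if mse == 0 else 10 * np.log10(peak ** 2 / mse)

def bit_error_rate(sent_bits, received_bits):
    # Fraction of watermark bits recovered incorrectly: lower is better.
    sent, recv = np.asarray(sent_bits), np.asarray(received_bits)
    return float(np.mean(sent != recv))

img = np.full((4, 4), 100.0)
noisy = img + 5.0                  # every pixel off by 5 -> MSE = 25
print(round(psnr(img, noisy), 2))  # 10*log10(255^2/25), about 34.15 dB
print(bit_error_rate([1, 0, 1, 1], [1, 1, 1, 0]))  # 2 of 4 bits wrong -> 0.5
```

In a watermarking context, PSNR measures how invisible the embedding is, while BER measures how faithfully the logo survives an attack.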
Kumar Rahul1, Brijesh Kumar Sinha1 and Vijay Kumar2
Objects need verification through a statistical model in the software development process, which is important in today's software industry. The software development process consists of several steps, from analysis through deployment and maintenance; a statistical model would therefore analyse objects, their various qualities and their relationships during the development process. Earlier, we designed a TMS in which objects are available for various purposes, such as accessibility and reusability, in the development of a software or embedded product; the statistical model justifies the level of accessibility in terms of profitability and quantity of access. So far, various statistical models have been implemented to identify and establish such relationships, but not all statistical models are used to analyse and calculate the parametric standard and determine the reusability factor in a software development process model. This statistical model is justified at various levels of development and would help determine the cost of accessibility (CoA) and the cost of reusability (CoR).
Sachin Lalar and Arun Kumar Yadav
The routing protocol is an essential and vital performance factor in mobile ad hoc networks. Routing protocols in a MANET must handle a large number of nodes with restricted resources, and a variety of routing protocols exist; the protocol chosen may affect the performance of the network. In this paper, we perform a comparative study of the DSDV, CSGR, WRP, AODV, OLSR, DSR, TORA, ZRP, ZHLS and DYMO routing protocols with respect to routing approach, routing structure, route selection, routes, routing tables, route maintenance, protocol operation, strengths and weaknesses.
Siddhartha Sankar Biswas
In this paper the author introduces the notion of a Z-weighted graph, or Z-graph, in graph theory, and considers the Shortest Path Problem (SPP) in a Z-graph. The classical Dijkstra's algorithm for finding shortest paths in graphs is not applicable to Z-graphs. Consequently, the author proposes a new algorithm, called the Z-Dijkstra's algorithm, which follows the philosophy of the classical Dijkstra's algorithm to solve the SPP in a Z-graph.
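For reference, the classical Dijkstra baseline that the Z-Dijkstra's algorithm extends can be sketched as follows (ordinary non-negative weights; the Z-graph weight structure is defined in the paper itself):

```python
import heapq

def dijkstra(graph, source):
    # graph: {node: [(neighbour, weight), ...]} with non-negative weights.
    dist = {source: 0}
    heap = [(0, source)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float("inf")):
            continue  # stale heap entry
        for v, w in graph.get(u, []):
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(heap, (nd, v))
    return dist

graph = {"a": [("b", 2), ("c", 5)], "b": [("c", 1), ("d", 4)], "c": [("d", 1)]}
print(dijkstra(graph, "a"))  # {'a': 0, 'b': 2, 'c': 3, 'd': 4}
```

Any Z-graph variant must replace the two weight operations here, addition (`d + w`) and comparison (`nd < dist[v]`), with their Z-weighted counterparts.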
T. Venkat Narayana Rao1, Sohail Ali Shaik1 and S. Manminder Kaur1
Predictive analytics plays an important role in the decision-making process and in intuitive business decisions, by overthrowing the traditional instinct-driven process. Predictive analytics utilizes data-mining techniques in order to predict future outcomes with a high level of certainty. This advanced branch of data engineering is composed of various analytical and statistical methods used to develop models that predict future occurrences. This paper examines the concepts of predictive analytics and the various mining methods used to achieve this. In conclusion, the paper discusses the process of and issues involved in knowledge discovery.
Sneha and Shoney Sebastian
The traditional way of storing huge amounts of data is not convenient, because processing those data in later stages is a very tedious job; nowadays, Hadoop is therefore used to store and process large amounts of data. Statistics on the data generated in recent years show that most of it was produced in the last two years. Hadoop is a good framework for storing and processing data efficiently: it works in parallel, and fault tolerance means there is no failure or data loss as such. Job scheduling is an important process in Hadoop MapReduce. Hadoop comes with three types of schedulers, namely the FIFO (first in, first out), Fair and Capacity schedulers, and schedulers are now a pluggable component in the Hadoop MapReduce framework. This paper discusses the native job scheduling algorithms in Hadoop. The fair scheduling algorithm is analysed with respect to its response time, throughput and performance, and its advantages and drawbacks are discussed. An improvised fair scheduling algorithm with a new strategy is proposed. Response time, throughput and performance are calculated for both the naive and the improvised fair scheduling algorithms. The improvised fair scheduling algorithm is used in cases where there are jobs with both high and low processing times.
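The idea behind fair scheduling, giving each pool of jobs an equal share of task slots rather than serving pools strictly first-come-first-served, can be illustrated with a toy simulation (pool names, task lists and the one-slot-per-round simplification are assumptions, not Hadoop's actual implementation):

```python
from collections import deque

def fair_schedule(pools):
    # Grant one task slot to each non-empty pool per round, in pool order.
    queues = {name: deque(tasks) for name, tasks in pools.items()}
    order = []
    while any(queues.values()):
        for name in pools:
            if queues[name]:
                order.append(queues[name].popleft())
    return order

pools = {"analytics": ["a1", "a2", "a3"], "etl": ["e1"], "adhoc": ["q1", "q2"]}
print(fair_schedule(pools))
# -> ['a1', 'e1', 'q1', 'a2', 'q2', 'a3']
```

Note how the single-task `etl` pool finishes after the first round instead of waiting behind the whole `analytics` backlog, which is precisely the short-job responsiveness FIFO lacks.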
Remya Thomas and M. Nachamai*
An antivirus, as the name implies, protects devices such as computers, mobiles and pen drives from viruses. All gadgets which interact with an open network are prone to viruses. A virus is a malicious software program which replicates by copying its code multiple times or by infecting a computer program (for example, by modifying the existing program), which can affect its operation. A virus performs harmful tasks on the affected host computer, such as consuming hard disk space and CPU time, or accessing private information. This paper examines the performance of five antivirus products (McAfee, Avast, Avira, Bitdefender and Norton) and their effectiveness on a computer. The performance is tested based on the time each antivirus takes to act on the computer. The parameters used to analyze the performance are quick scan, full scan and custom scan with respect to time. The analysis shows that Bitdefender's performance is better than that of the other selected antiviruses.
Helen Treasa Sebastian and Rupali Wagh
Since the beginning of data mining, the discovery of knowledge from databases has been carried out to solve various problems and has helped businesses come up with practical solutions. Large companies are seeing declining revenue due to increasing customer loss.
The process in which a customer leaves one company and joins another is called churn. This paper discusses how to predict the customers that might churn, using the R package for the prediction. R helps represent a large churn dataset in the form of graphs, which depict the outcome through various data visualizations. Churn is a very important area in which the telecom domain can gain or lose customers, and hence the business spends a lot of time making predictions, which in turn helps draw the necessary business conclusions. Churn can be avoided by studying the past history of customers. Logistic regression is used for the analysis. To proceed with logistic regression, we must first eliminate the outliers present; this has been achieved by cleaning the data (for redundancy, false data, etc.), and the result has been populated into a prediction Excel sheet with which the analysis has been performed.
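Although the paper works in R, the logistic-regression step can be sketched library-free (here in Python/NumPy, on a made-up churn table where the only feature is tenure in months and 1 means churned):

```python
import numpy as np

tenure = np.array([1, 2, 3, 4, 20, 24, 30, 36], dtype=float)
churned = np.array([1, 1, 1, 0, 0, 0, 0, 0], dtype=float)

# Standardise the feature, then run gradient descent on the log-loss.
mu, sigma = tenure.mean(), tenure.std()
x = (tenure - mu) / sigma
w, b = 0.0, 0.0
for _ in range(2000):
    p = 1.0 / (1.0 + np.exp(-(w * x + b)))   # sigmoid probabilities
    w -= 0.1 * np.mean((p - churned) * x)    # gradient of the log-loss
    b -= 0.1 * np.mean(p - churned)

def predict(months):
    return 1.0 / (1.0 + np.exp(-(w * (months - mu) / sigma + b)))

print(predict(2.0) > 0.5, predict(30.0) < 0.5)  # short tenure -> churn risk
```

The fitted coefficient is negative here, encoding the intuition that longer-tenured customers are less likely to churn; in practice many features (usage, complaints, billing) would enter the model.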
C. S. Vikas1 and Ashok Immanuel2
Saving people's lives is of the utmost importance in today's world. The best way to save lives is to have an ambulance system which is effective and reachable by the user/client; this paper gives a solution which focuses on making an ambulance available to a nearby user/client/patient in the least possible time, which will help save many lives. It was developed after extensive study and analysis of newly evolved technology. The navigator.geolocation method, based on RESTful web services, is used to keep the ambulance location updated in the database, so that it can be seen by the user of the application, making it easy to book the ambulance. The client's location is pinpointed on the Google map, as is the nearby ambulance. Once the patient is on board, the ambulance's location is taken and the list of hospitals is pointed out on the map, which helps the ambulance driver choose a nearby hospital and reach it with the patient on time.
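The "nearest ambulance" step amounts to computing a great-circle (haversine) distance from the client to each ambulance; the coordinates below are made up for illustration.

```python
import math

def haversine_km(lat1, lon1, lat2, lon2):
    # Great-circle distance between two lat/lon points, in kilometres.
    r = 6371.0  # mean Earth radius in km
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

client = (12.9716, 77.5946)                      # hypothetical client location
ambulances = {"A1": (12.9352, 77.6245),
              "A2": (12.9279, 77.6271),
              "A3": (13.0358, 77.5970)}
nearest = min(ambulances, key=lambda k: haversine_km(*client, *ambulances[k]))
print(nearest)
```

A production system would rank by estimated travel time over the road network rather than straight-line distance, but the haversine ranking is a common first cut.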
Purnima Sachdeva and K N Sarvanan
Bike sharing systems have been gaining prominence all over the world, with more than 500 successful systems deployed in major cities like New York, Washington and London. With increasing awareness of the harms of fossil-fuel-based means of transportation, problems of traffic congestion in cities and increasing health consciousness in urban areas, citizens are adopting bike sharing systems with zest. Even developing countries like India are adopting the trend, with a bike sharing system in the pipeline for Karnataka. This paper tackles the problem of predicting the number of bikes which will be rented at any given hour in a given city, henceforth referred to as the ‘Bike Sharing Demand’ problem. In this vein, the paper investigates the efficacy of standard machine learning techniques, namely SVM, regression, random forests and boosting, by implementing them and analyzing their performance with respect to each other. The paper also presents two novel methods, Linear Combination and Discriminating Linear Combination, for the ‘Bike Sharing Demand’ problem, which supersede the aforementioned techniques as good estimates in the real world.
Tenzin Dawa1 and N. Vijayalakshmi2
Face recognition is a biometric technique which can be used to identify or verify a person from a digital image by using facial features that are unique to each individual. There are many techniques which can be used in a face recognition system. In this paper we review some of these algorithms and compare them to see which performs better. The techniques compared are Non-negative Matrix Factorization (NMF) with a Support Vector Machine (SVM), Partial Least Squares (PLS) with a Hidden Markov Model (HMM), and Local Ternary Patterns (LTP) with Booth's algorithm.
Alan J. George and Deepa V. Jose
Energy efficiency has always remained a pressing matter in the world of wireless sensor networks. Irrespective of the number of routing protocols that exist for wireless sensor networks, only a handful can be called efficient. Above all these routing protocols stands the emblematic one, the LEACH protocol. This research work aims to bring forth a new routing strategy based on the LEACH protocol, which improves energy efficiency in wireless sensor networks, and to apply the given clustering technique in randomly deployed and fixed sensor network simulation environments using MATLAB. In-depth simulations have shown that the proposed clustering strategy gives better performance than LEACH in terms of node lifetime. A comparative analysis of the rate of energy consumed under various node deployment strategies has also been carried out.
M. S. Vaishnavi* and A. Vijayalakshmi
Aging face recognition poses a key difficulty in facial recognition: identifying a person's face over varied ages. It includes issues such as age estimation, progression and verification. The non-availability of facial aging databases makes it harder for any system to achieve good accuracy, as there are no good training sets available. Age estimation, when done correctly, has a varied number of real-life applications, such as age-specific vending machines, age-specific access control and finding missing children. This paper implements age estimation using the Park Aging Mind Laboratory face database, which contains metadata and 293 unique images of 293 individuals. Ages range from 19 to 45, with a median age of 32. Race is classified into two categories, African-American and Caucasian, giving an accuracy of 98%. Sobel edge detection and Orthogonal Locality Preserving Projection were used as the dominant features for the training and testing of age estimation. Multi-stage binary classification using a support vector machine was used to classify images into an age group, thereafter predicting an individual's age. The effectiveness of this method could be increased by using a larger dataset with a wider age range.
S. Sebastian and R. S. Chouhan
It is essential to maintain a balance between privacy protection and knowledge discovery. Internet users depend daily on SSL/HTTPS for secure communication on the Internet.
Over the years, many attacks on the certificate trust model it uses have evolved. Mutual SSL authentication, or shared verification, refers to two parties validating each other by checking digital certificates, so that both sides are assured of the other's identity.
In technical terms, it refers to a client (web browser or client application) authenticating itself to a server (server application), and that server likewise confirming itself to the client, by checking the public key certificates issued by trusted Certificate Authorities (CAs). Since confirmation depends on digital certificates, certification authorities such as Verisign or Microsoft Certificate Server are a critical part of the mutual authentication process.
From a high-level perspective, the paper walks through the process of authenticating and setting up an encrypted channel using certificate-based mutual SSL authentication.
Nilesh J. Hadiya1, Neeraj Kumar1, B. M. Mote2, Chiragkumar M. Thumar1 and D. D. Patil2
A field experiment was conducted during the kharif season of 2015 to assess the prediction performance of the CERES-Rice and WOFOST models for the grain and straw yield of three rice cultivars (V1: Jaya, V2: Gurjari and V3: GNR-2) sown under four different environments (D1: 10/07/2015, D2: 25/07/2015, D3: 09/08/2015 and D4: 24/08/2015) with two nitrogen levels, N1: 75 and N2: 100 kg NPK ha-1. Results showed that the WOFOST model's predictions of grain yield for the rice cultivars under different treatments were closer to the corresponding observed values, with a percent error (PE) of 18.66%, as compared to the CERES-Rice model with a PE of 28.56%; but for straw yield the CERES-Rice model gave closer predictions than the WOFOST model, with PEs of 20.99% and 27.33% respectively between the predicted and observed values.