Monday, July 30, 2018

NETWORKING



Sai Info Solution provides project development and training. We develop projects for BE/ME/PhD. A computer network, or data network, is a digital telecommunications network which allows nodes to share resources. In computer networks, computing devices exchange data with each other using connections (data links) between nodes. These data links are established over cable media such as wires or optic cables, or wireless media such as Wi-Fi. Network computer devices that originate, route and terminate the data are called network nodes.[1] Nodes can include hosts such as personal computers, phones and servers, as well as networking hardware. Two such devices can be said to be networked together when one device is able to exchange information with the other device, whether or not they have a direct connection to each other. In most cases, application-specific communications protocols are layered (i.e. carried as payload) over other more general communications protocols. This formidable collection of information technology requires skilled network management to keep it all running reliably. Computer networks support an enormous number of applications and services such as access to the World Wide Web, digital video, digital audio, shared use of application and storage servers, printers, and fax machines, and use of email and instant messaging applications, as well as many others. Computer networks differ in the transmission medium used to carry their signals, communications protocols to organize network traffic, the network's size, topology, control mechanism and organizational intent. The best-known computer network is the Internet.


Today we will see one example application: Big Data Analytics in Mobile Cellular Networks.


Big Data Analytics in Mobile Cellular Networks

ABSTRACT

This paper considers the use of big data analytics in mobile cellular networks, which have become both generators and carriers of massive data. Big data analytics can improve the performance of mobile cellular networks and also increase the revenue of operators. We first present a unified data model based on random matrix theory and machine learning. We then present an architectural framework for applying big data analytics in mobile cellular networks, and describe several illustrative examples, including big signaling data, big traffic data, big location data, big radio waveform data and big heterogeneous data. In this cellular network the data are classified and then sent from sender to receiver. The implementation uses the Hadoop framework, in particular its mapper and reducer concepts.


                         Fig. 1 Big Data Analytics in Mobile Cellular Networks


INTRODUCTION

In a big data analytics system, the sender transmits data to the receiver over the network. At the time of sending, the data are categorized, for example into signaling, location, traffic and waveform data. Big Data is a term used to describe a huge volume of both structured and unstructured data that is so large it is difficult to process using conventional database and programming techniques. In most enterprise scenarios the volume of data is too big, it moves too fast, or it exceeds current processing capacity. Big data can help organizations improve operations and make faster, smarter decisions. When captured, structured, managed, stored and analyzed, this data can help an organization gain valuable insight to increase revenue, acquire or retain customers, and improve operations.

Big data refers to data sets that are so large or complex that traditional data processing applications are insufficient to handle them. Challenges include analysis, data capture, data curation, search, sharing, storage, transfer, visualization, querying, updating and data security. The term "big data" often refers simply to the use of predictive analytics, user behavior analytics, or certain other advanced data analysis methods that extract value from data, and rarely to a particular size of data set. Scientists, business executives, medical practitioners, advertisers and governments alike regularly encounter difficulties with large data sets in areas including Internet search, finance, urban informatics and business informatics. Scientists also encounter limitations in e-Science work, including meteorology, genomics, complex physics simulations, biology and environmental research.

With recent advances in wireless technologies and the growth of mobile applications, mobile cellular networks have become both generators and carriers of big data. Traditional data analysis deals with structured data, that is, data contained in relational databases and spreadsheets. Big data analytics is capable of gathering scattered data and understanding user behavior from many points of view: subscribers' living habits and schedules can largely be inferred from their usage patterns across different periods of the day, and their surfing habits, frequently visited places or range of activities can be approximately obtained from the home location register. With big data analytics, operators can monitor their networks in real time and make autonomous, dynamic decisions. Mobile Service Providers (MSPs) process enormous amounts of customer-generated call records every day, and analyzing this big data can help solve some of the most basic issues MSPs face. With the explosive growth of big data, the high volume of traffic in mobile cellular networks can be handled by the Hadoop framework and the MapReduce programming model, which also provide security for high-traffic data. For analyzing and limiting network traffic, a large-scale framework based on Hadoop is used.
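As a rough illustration of the categorization step described above, the sketch below sorts incoming cellular records into signaling, traffic, location and waveform buckets before analysis; the record format and field names are assumptions made only for this example, not an actual operator data model.

    # Minimal sketch: bucket incoming cellular records by category before analysis.
    # The record format and the "type" field are illustrative assumptions.
    from collections import defaultdict

    CATEGORIES = ("signaling", "traffic", "location", "waveform")

    def categorize(records):
        buckets = defaultdict(list)
        for rec in records:
            kind = rec.get("type")
            buckets[kind if kind in CATEGORIES else "other"].append(rec)
        return buckets

    sample = [
        {"type": "signaling", "event": "handover", "cell": 12},
        {"type": "location", "user": "u1", "lat": 19.99, "lon": 73.78},
        {"type": "traffic", "user": "u1", "bytes": 10240},
    ]
    print({k: len(v) for k, v in categorize(sample).items()})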




HADOOP

Hadoop is a complete ecosystem of open source projects that provides a framework for dealing with big data. Hadoop works in a similar fashion: at the bottom we have machines arranged in parallel, which resemble the individual contributors in an organization. Each machine has a data node and a task tracker. The data node belongs to HDFS (the Hadoop Distributed File System) and the task tracker belongs to MapReduce. The data nodes contain the actual data, and the task trackers carry out the operations. You can imagine the task tracker as your arms and legs, which enable you to do a task, and the data node as your brain, which contains the information you need to process. These machines work in silos, so it is essential to coordinate them. The task trackers (the project managers in our analogy) on the various machines are coordinated by a job tracker. The job tracker makes sure that every operation is completed, and if there is a process failure at any node it assigns a duplicate task to another task tracker; it also distributes the whole job across all the machines. A name node, in turn, coordinates all the data nodes: it governs the distribution of data going to each machine and checks for any kind of data loss on any machine. If such a loss happens, it finds the duplicate data that was sent to another data node and replicates it again. You can think of this name node as the people manager in our analogy, concerned more with the retention of the whole dataset.
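To make the mapper and reducer idea concrete, here is a minimal Hadoop Streaming style pair in Python that counts records per cell ID; the input format (a log with the cell ID as the first field of each line) and the file names are assumptions made only for this sketch.

    # mapper.py - minimal Hadoop Streaming mapper (assumed input: one log line
    # per record, with the cell ID as the first whitespace-separated field).
    import sys

    for line in sys.stdin:
        fields = line.split()
        if fields:
            print(f"{fields[0]}\t1")

    # reducer.py - sums the counts emitted by the mapper; Hadoop Streaming
    # delivers the mapper output sorted by key, so equal keys arrive together.
    import sys

    current_key, count = None, 0
    for line in sys.stdin:
        key, value = line.rstrip("\n").split("\t")
        if key != current_key:
            if current_key is not None:
                print(f"{current_key}\t{count}")
            current_key, count = key, 0
        count += int(value)
    if current_key is not None:
        print(f"{current_key}\t{count}")

Locally, this pair can be tested with a pipeline such as cat cells.log | python mapper.py | sort | python reducer.py; on a cluster the same two scripts would be passed to the Hadoop Streaming jar via its -mapper, -reducer, -input and -output options.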


REFERENCE ARTICLES 


Saturday, July 28, 2018

Data Mining

DATA MINING


Sai Info Solution provides project development and training. We develop projects for BE/ME/PhD. Data mining is the process of discovering patterns in large data sets involving methods at the intersection of machine learning, statistics, and database systems. An interdisciplinary subfield of computer science, it is an essential process wherein intelligent methods are applied to extract data patterns; the overall goal is to extract information from a data set and transform it into an understandable structure for further use.[1] Aside from the raw analysis step, it involves database and data management aspects, data pre-processing, model and inference considerations, interestingness metrics, complexity considerations, post-processing of discovered structures, visualization, and online updating. Data mining is the analysis step of the "knowledge discovery in databases" process, or KDD. The term is a misnomer, because the goal is the extraction of patterns and knowledge from large amounts of data, not the extraction (mining) of data itself. It is also a buzzword and is frequently applied to any form of large-scale data or information processing (collection, extraction, warehousing, analysis, and statistics) as well as any application of computer decision support systems, including artificial intelligence, machine learning, and business intelligence. The book Data Mining: Practical Machine Learning Tools and Techniques with Java[8] (which covers mostly machine learning material) was originally to be named just Practical Machine Learning, and the term data mining was added only for marketing reasons. Often the more general terms (large-scale) data analysis and analytics, or, when referring to actual methods, artificial intelligence and machine learning, are more appropriate. The actual data mining task is the semi-automatic or automatic analysis of large quantities of data to extract previously unknown, interesting patterns such as groups of data records (cluster analysis), unusual records (anomaly detection), and dependencies (association rule mining, sequential pattern mining). This usually involves using database techniques such as spatial indices. These patterns can then be seen as a kind of summary of the input data, and may be used in further analysis or, for example, in machine learning and predictive analytics. For example, the data mining step might identify multiple groups in the data, which can then be used to obtain more accurate prediction results by a decision support system. Neither the data collection, data preparation, nor result interpretation and reporting is part of the data mining step, but they do belong to the overall KDD process as additional steps. The related terms data dredging, data fishing, and data snooping refer to the use of data mining methods to sample parts of a larger population data set that are (or may be) too small for reliable statistical inferences to be made about the validity of any patterns discovered. These methods can, however, be used in creating new hypotheses to test against the larger data populations.
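As a small, self-contained illustration of one of the tasks named above (cluster analysis), the sketch below groups toy 2-D records with scikit-learn's KMeans; the data and the choice of two clusters are assumptions made only for this example.

    # Minimal cluster-analysis sketch using scikit-learn's KMeans on toy 2-D data.
    import numpy as np
    from sklearn.cluster import KMeans

    # Toy records forming two obvious groups (illustrative data, not a real data set).
    X = np.array([[1.0, 1.1], [0.9, 1.0], [1.2, 0.8],
                  [8.0, 8.2], [7.9, 8.1], [8.3, 7.8]])

    model = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)
    print(model.labels_)           # cluster id per record, e.g. [0 0 0 1 1 1]
    print(model.cluster_centers_)  # approximate group centres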



Today we will see one example application: A Personalized Mobile Search Engine.



Personalized Mobile Search Engine


ABSTRACT


We propose a personalized mobile search engine, PMSE, that captures the users' preferences in the form of concepts by mining their clickthrough data. Due to the importance of location information in mobile search, PMSE classifies these concepts into content concepts and location concepts. In addition, users' locations (positioned by GPS) are used to supplement the location concepts in PMSE. The user preferences are organized in an ontology-based, multi-facet user profile, which is used to adapt a personalized ranking function for rank adaptation of future search results. To characterize the diversity of the concepts associated with a query and their relevance to the user's needs, four entropies are introduced to balance the weights between the content and location facets. Based on the client-server model, we also present a detailed architecture and design for the implementation of PMSE. In our design, the client collects and stores the clickthrough data locally to protect privacy, whereas heavy tasks such as concept extraction, training and reranking are performed at the PMSE server. Moreover, we address the privacy issue by restricting the information in the user profile exposed to the PMSE server with two privacy parameters. We prototype PMSE on the Google Android platform. Experimental results show that PMSE significantly improves the precision compared with the baseline.
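To illustrate the role the entropies play, here is a minimal sketch (with made-up click counts) that computes the Shannon entropy of a query's content-concept and location-concept click distributions; a higher entropy means the clicks are spread over more concepts of that facet. The click counts and the final weighting rule are simplifying assumptions, not the paper's exact formulas.

    # Minimal sketch: entropy of clicked concepts per facet for one query.
    # Click counts and the weighting rule are illustrative assumptions.
    import math

    def entropy(counts):
        total = sum(counts)
        probs = [c / total for c in counts if c > 0]
        return -sum(p * math.log2(p) for p in probs)

    content_clicks = {"hotel": 6, "restaurant": 3, "menu": 1}   # content concepts
    location_clicks = {"nashik": 8, "mumbai": 2}                # location concepts

    h_content = entropy(content_clicks.values())
    h_location = entropy(location_clicks.values())

    # Simple normalisation: give more weight to the facet with more diversity.
    w_content = h_content / (h_content + h_location)
    print(f"H(content)={h_content:.2f}, H(location)={h_location:.2f}, w_content={w_content:.2f}")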







Fig. The general process flow of PMSE



INTRODUCTION

A social network is a social structure made of individuals, called nodes, which are connected by one or more specific types of interdependency, such as friendship, kinship, financial exchange, dislike, sexual relationships, or relationships of beliefs, knowledge or prestige [1]. A social network's links represent not only the flow of personal information, but also the relationship status expressed quantitatively. The overall graph model of a social network is composed of many nodes and the links that connect them, and each node's direct and indirect connections form the entire network.

However, current personalized systems based on social networks were designed and built for the PC and do not provide a step-by-step method for moving from the PC to the smartphone. To solve these problems, this research analyzes an individual's characteristics in the social network environment and develops a Personalized Information Retrieval System which can accurately search for what a user wants. The Personalized Information Retrieval System for efficient personalized information provision proposed in this study differs from existing ones in methodology as follows.

Firstly, as the system is built on the basis of NFC (Near Field Communication), it attempts to provide its own custom service quickly and easily using the information stored in NFC. Once the SNS and an NFC smartphone are associated with each other, payment is made by touching an NFC tag when visiting well-known restaurants, and the information recorded in the SNS is used to provide search results customized to the individual's tastes and preferences when carrying out a search in the individualized search system. That is, typing the same search keyword may bring different search results on an NFC smartphone because individuals have different preferences.

Secondly, the existing Personalized Information Retrieval System fails to analyze the search system using smartphones in a social network environment. Despite the increasing number of web users on smartphones and ongoing research into individualized services, the smartphone environment does not provide search rankings suited to personal preferences. For example, when a user who wants to stop by a pasta restaurant offering pasta for about $10 and who listens to rock music asks for an information search via smartphone, the search results should be prioritized and presented in line with the user's personal tastes. However, the existing systems do not show search rankings that take the individual's tastes and preferences into consideration.
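As a toy illustration of why the same keyword can return different rankings for different users, the sketch below re-scores candidate results by how well their concepts match a user's profile weights; the profiles, results and scoring rule are all hypothetical and only stand in for the learned ranking function.

    # Minimal sketch of personalised re-ranking: score = base relevance plus
    # the overlap between a result's concepts and the user's profile weights.
    # Profiles, results and the scoring rule are illustrative assumptions.

    results = [
        {"title": "Cheap pasta place", "base": 0.6, "concepts": {"pasta", "budget"}},
        {"title": "Fine-dining Italian", "base": 0.7, "concepts": {"pasta", "luxury"}},
        {"title": "Rock music pasta bar", "base": 0.5, "concepts": {"pasta", "rock"}},
    ]

    user_profile = {"pasta": 0.5, "budget": 0.8, "rock": 0.9}  # learned preferences

    def personalised_score(result, profile):
        overlap = sum(profile.get(c, 0.0) for c in result["concepts"])
        return result["base"] + overlap

    for r in sorted(results, key=lambda r: personalised_score(r, user_profile), reverse=True):
        print(f"{personalised_score(r, user_profile):.2f}  {r['title']}")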





REFERENCE ARTICLES 

  1. Facilitating Document Annotation using Content and Querying Value
  2. F5: A Steganographic Algorithm - High Capacity Despite Better Steganalysis
  3. Topic Mining over Asynchronous Text Sequences
  4. Building Domain Ontologies from Text for Educational Purposes
  5. Online Interactive E-Learning Using Video Annotation

  





Wednesday, July 25, 2018

IMAGE PROCESSING


Sai Info Solution provides project development and training. We develop projects for BE/ME/PhD.
In computer science, digital image processing is the use of computer algorithms to perform image processing on digital images. As a subcategory or field of digital signal processing, digital image processing has many advantages over analog image processing. It allows a much wider range of algorithms to be applied to the input data and can avoid problems such as the build-up of noise and signal distortion during processing. Since images are defined over two dimensions (perhaps more), digital image processing may be modeled in the form of multidimensional systems.
Many of the techniques of digital image processing, or digital picture processing as it often was called, were developed in the 1960s at the Jet Propulsion Laboratory, Massachusetts Institute of Technology, Bell Laboratories, University of Maryland, and a few other research facilities, with application to satellite imagery, wire-photo standards conversion, medical imaging, videophone, character recognition, and photograph enhancement.[1] The cost of processing was fairly high, however, with the computing equipment of that era. That changed in the 1970s, when digital image processing proliferated as cheaper computers and dedicated hardware became available. Images then could be processed in real time, for some dedicated problems such as television standards conversion. As general-purpose computers became faster, they started to take over the role of dedicated hardware for all but the most specialized and computer-intensive operations. With the fast computers and signal processors available in the 2000s, digital image processing has become the most common form of image processing and generally is used because it is not only the most versatile method, but also the cheapest.


Today we will see one example application: Watermarking Security - Theory and Practice.

  

Watermarking Security: Theory and Practice



ABSTRACT

This paper considers watermarking security from a cryptanalysis point of view. The main idea is that information about the secret key leaks from the observations, for instance watermarked pieces of content, available to the opponent. Tools from information theory (Shannon's mutual information and Fisher's information matrix) can measure this leakage of information. The security level is then defined as the number of observations the attacker needs to successfully estimate the secret key. This theory is applied to two common watermarking methods: the substitutive scheme and spread spectrum based techniques. Their security levels are calculated against three kinds of attack. The experimental work illustrates how Blind Source Separation (especially Independent Component Analysis) algorithms help the opponent exploit this information leakage to disclose the secret carriers in the spread spectrum case. Simulations assess the security levels derived in the theoretical part of the article.
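For the spread-spectrum case discussed above, the toy sketch below embeds one bit with a secret carrier and then shows how an opponent who collects many watermarked signals can estimate that carrier, which is the intuition behind the information leakage the paper measures. The carrier length, embedding strength, the assumption that the same bit is embedded in every observation, and the plain averaging attack are all simplifications; the paper's experiments rely on Blind Source Separation / ICA rather than averaging.

    # Toy spread-spectrum watermarking and a simple carrier-estimation attack.
    # Assumptions: zero-mean host signals, one secret unit-norm carrier u, the
    # SAME bit embedded in every observation, and an averaging attack.
    import numpy as np

    rng = np.random.default_rng(0)
    n, n_obs, strength = 256, 500, 2.0

    u = rng.standard_normal(n)
    u /= np.linalg.norm(u)                    # secret carrier (unit norm)

    def embed(host, bit):
        return host + strength * bit * u      # watermarked signal

    def detect(signal):
        return 1 if signal @ u > 0 else -1    # correlation detector

    # Opponent observes many watermarked contents carrying the same bit (+1).
    observations = np.array([embed(rng.standard_normal(n), +1) for _ in range(n_obs)])
    u_est = observations.mean(axis=0)
    u_est /= np.linalg.norm(u_est)

    # Usually recovers the embedded bit (+1); this is a noisy toy detector.
    print("detected bit:", detect(embed(rng.standard_normal(n), +1)))
    # Correlation between true and estimated carrier approaches 1 as n_obs grows.
    print("carrier correlation:", round(float(abs(u_est @ u)), 2))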

INTRODUCTION

Digital watermarking studies have always been driven by the improvement of robustness. Most articles in this field deal with this criterion, presenting more and more impressive experimental assessments. Some key events in this quest are the use of spread spectrum, the invention of resynchronization schemes, the discovery of the side information channel, and the formulation of the opponent's actions as a game. On the contrary, security has received little attention in the watermarking community. The first difficulty is that security and robustness are neighboring concepts, which are hardly perceived as different. The intentionality behind the attack is not enough to make a clear cut between these two concepts. An image compression is clearly an attack related to robustness, but it might happen intentionally, i.e. with the purpose of removing the watermark, or not. Robust watermarking is defined as a communication channel multiplexed into original content in a non-perceptible way, and whose "capacity degrades as a smooth function of the degradation of the marked content". We add that the degradation is due to a classical content processing (compression, low-pass filtering, noise addition, geometric attack). The attacker has three known strategies to defeat watermark robustness: to remove enough watermark signal energy, to jam the hidden communication channel, or to desynchronize the watermarked content. T. Kalker then defines watermarking security as "the inability by unauthorized users to access [i.e. to remove, to read, or to write the hidden message] the communication channel" established by a robust watermarking scheme. Security deals with intentional attacks whose aims are not only the removal of the watermark signal, excluding those already encompassed in the robustness category since the watermarking technique is assumed to be robust.
  




Applications of Digital Image Processing

Image sharpening and restoration

Image sharpening and restoration here refers to processing images captured by a modern camera to make them better, or to manipulating those images to achieve a desired result; in other words, the kind of thing Photoshop usually does. This includes zooming, blurring, sharpening, grayscale-to-color conversion (and vice versa), edge detection, image retrieval and image recognition. The common examples are shown below, followed by a short code sketch:

Fig. Original image

Fig. Zoomed image

Fig. Blurred image

Fig. Sharpened image

Fig. Edges
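A minimal sketch of the operations named above (blurring, sharpening and edge detection) using OpenCV; the input file name is a placeholder, and the kernel and thresholds are just common illustrative choices.

    # Minimal sketch of blur, sharpen and edge detection with OpenCV.
    # "input.jpg" is a placeholder file name.
    import cv2
    import numpy as np

    img = cv2.imread("input.jpg", cv2.IMREAD_GRAYSCALE)
    if img is None:
        raise SystemExit("input.jpg not found")

    blurred = cv2.GaussianBlur(img, (7, 7), 0)          # blur

    sharpen_kernel = np.array([[0, -1, 0],
                               [-1, 5, -1],
                               [0, -1, 0]])
    sharpened = cv2.filter2D(img, -1, sharpen_kernel)   # sharpen

    edges = cv2.Canny(img, 100, 200)                    # edge map

    for name, out in (("blur.jpg", blurred), ("sharp.jpg", sharpened), ("edges.jpg", edges)):
        cv2.imwrite(name, out)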


REFERENCE ARTICLES 



  1. Obstacle Detection and Collision Avoidance for a UAV With Complementary Low-Cost Sensors.
  2. Time-Optimal Maneuver Planning in Automatic Parallel Parking Using a Simultaneous Dynamic Optimization Approach.
  3. Privacy-Assured Outsourcing of Image Reconstruction Service in Cloud.
  4. Query-Adaptive Image Search with Hash Codes.














Tuesday, July 24, 2018

Artificial Intelligence

                                                  

ARTIFICIAL INTELLIGENCE

Sai Info Solution provides project development and training. We develop projects for BE/ME/PhD. Artificial intelligence (AI), sometimes called machine intelligence, is intelligence demonstrated by machines, in contrast to the natural intelligence displayed by humans and other animals. In computer science, AI research is defined as the study of "intelligent agents": any device that perceives its environment and takes actions that maximize its chance of successfully achieving its goals. Colloquially, the term "artificial intelligence" is applied when a machine mimics "cognitive" functions that humans associate with other human minds, such as "learning" and "problem solving". The scope of AI is disputed: as machines become increasingly capable, tasks considered as requiring "intelligence" are often removed from the definition, a phenomenon known as the AI effect, leading to the quip, "AI is whatever hasn't been done yet." For instance, optical character recognition is frequently excluded from "artificial intelligence", having become a routine technology. Capabilities generally classified as AI as of 2017 include successfully understanding human speech, competing at the highest level in strategic game systems (such as chess and Go), autonomous cars, intelligent routing in content delivery networks, and military simulations.

Today we will see one example application of artificial intelligence: Tracking in Low Frame Rate Video with a Cascade Particle Filter.


Tracking in Low Frame Rate Video: A Cascade Particle Filter


 ABSTRACT

Tracking objects in low frame rate (LFR) video or with abrupt motion poses two main difficulties which most conventional tracking methods can hardly handle: 1) poor motion continuity and increased search space and 2) fast appearance variation of target and more background clutter due to increased search space. In this paper, we address the problem from a view which integrates conventional tracking and detection and present a temporal probabilistic combination of discriminative observers of different life spans. Each observer is learned from different ranges of samples, with different subsets of features, to achieve varying levels of discriminative power at varying costs. An efficient fusion and temporal inference is then done by a cascade particle filter which consists of multiple stages of importance sampling. Experiments show significantly improved accuracy of the proposed approach in comparison with existing tracking methods, under the condition of LFR data and abrupt motion of both target and camera. 


INTRODUCTION

Tracking in low frame rate (LFR) video is a practical requirement of many of today's real-time applications, such as micro embedded systems and visual surveillance. The reasons are various: hardware cost, an LFR data source, online processing speed which upper-bounds the frame rate, etc. Moreover, for a tracking system, the LFR condition is equivalent to abrupt motion, which is often encountered but hard to cope with. Although the body of literature regarding tracking is huge, most existing approaches (except a few categories) cannot be readily applied to LFR tracking problems, either because of slow speed or because of vulnerability to the motion and appearance discontinuity caused by LFR data.

Figure 1. Tracking 4 consecutive frames in a 5 fps video

The key notion of our solution is that detection and tracking can be integrated to overcome this difficulty. As two extremes, conventional tracking facilitates itself with every possible assumption of temporal continuity, while detection aims at a universal description or discrimination of the target from the others. In LFR tracking, the continuity of the target is often too weak for conventional tracking (Figure 1); meanwhile, applying reliable detection over a large search space is often unaffordable, nor is it capable of identifying the target across frames, due to its neglect of context.
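The paper's cascade particle filter combines importance sampling with discriminative observers of different life spans; the sketch below shows only the generic particle-filter skeleton (predict, weight by an observation likelihood, resample) in 1-D, with a Gaussian likelihood standing in for the learned observers and a deliberately broad motion model to mimic abrupt motion between frames.

    # Generic particle-filter skeleton in 1-D (toy stand-in for the cascade
    # particle filter: a Gaussian likelihood replaces the learned observers).
    import numpy as np

    rng = np.random.default_rng(1)
    n_particles, motion_std, obs_std = 500, 8.0, 2.0

    def step(particles, weights, observation):
        # 1) predict: diffuse particles with a broad motion model (abrupt motion).
        particles = particles + rng.normal(0.0, motion_std, size=particles.shape)
        # 2) weight: importance sampling against the observation likelihood.
        weights = weights * np.exp(-0.5 * ((particles - observation) / obs_std) ** 2)
        weights /= weights.sum()
        # 3) resample: draw particles proportionally to their weights.
        idx = rng.choice(len(particles), size=len(particles), p=weights)
        return particles[idx], np.full(len(particles), 1.0 / len(particles))

    particles = rng.uniform(-50, 50, n_particles)
    weights = np.full(n_particles, 1.0 / n_particles)
    for true_pos in [0.0, 10.0, 25.0, 30.0]:          # abrupt jumps between frames
        particles, weights = step(particles, weights, true_pos)
        print(round(float(particles.mean()), 1))      # posterior mean roughly follows the jumps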









   REFERENCE ARTICLES 
  1. Design of Variable Vehicle Handling Characteristics Using Four-Wheel Steer-by-Wire
  2. Decentralized RFID Coverage Algorithms with Applications for the Reader Collisions Avoidance Problem
  3. An Efficient Privacy-Preserving Ranked Keyword Search Method
  4. Next Generation Data Classification and Linkage
  5. Anomaly Detection via Online Oversampling Principal Component Analysis



If anyone is interested in doing research in the above subject for BTech/MTech/PhD engineering project work,
Kindly Contact Below

Contact Details:
Santosh Gore Sir
Ph:09096813348 / 8446081043 / 0253-6644344
Email: sai.info2009@gmail.com 








Monday, July 23, 2018

                           
INTERNET OF THINGS (IOT)

Sai Info Solution provides project development and training. We develop projects for BE/ME/PhD. The Internet of Things (IoT) is the network of physical devices, vehicles, home appliances, and other items embedded with electronics, software, sensors, actuators, and connectivity, which enables these things to connect and exchange data, creating opportunities for more direct integration of the physical world into computer-based systems and resulting in efficiency improvements, economic benefits, and reduced human exertion. The number of IoT devices increased 31% year-over-year to 8.4 billion in 2017, and it is estimated that there will be 30 billion devices by 2020. The global market value of IoT is projected to reach $7.1 trillion by 2020. IoT involves extending internet connectivity beyond standard devices, such as desktops, laptops, smartphones and tablets, to any range of traditionally dumb or non-internet-enabled physical devices and everyday objects. Embedded with technology, these devices can communicate and interact over the internet, and they can be remotely monitored and controlled.


Today we will see one example application of IoT in a real-time road traffic vehicular network for signal automation and monitoring.

Adaptive Traffic Signal Control with Vehicular Ad Hoc Networks


INTRODUCTION

We propose to use vehicular ad hoc networks (VANETs) to collect and aggregate real-time speed and position information on individual vehicles to optimize signal control at traffic intersections. We first formulate the vehicular traffic signal control problem as a job scheduling problem on processors, with jobs corresponding to platoons of vehicles. Under the assumption that all jobs are of equal size, we give an online algorithm, referred to as the oldest job first (OJF) algorithm, to minimize the delay across the intersection. We prove that the OJF algorithm is 2-competitive, implying that the delay is less than or equal to twice the delay of an optimal offline schedule with perfect knowledge of the arrivals. We then show how a VANET can be used to group vehicles into approximately equal-sized platoons, which can then be scheduled using OJF.
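A minimal sketch of the oldest-job-first idea: whenever the signal becomes free, serve the approach whose head-of-line platoon has been waiting the longest. The platoon arrival times, the fixed service time and the simple event loop are illustrative assumptions, not the paper's full model.

    # Minimal oldest-job-first (OJF) sketch: whenever the signal becomes free,
    # serve the approach whose head platoon has the earliest arrival time.
    # Platoons, arrival times and the service time are illustrative assumptions.
    from collections import deque

    # arrival time of each platoon (jobs of roughly equal size), per approach
    queues = {
        "north-south": deque([2.0, 5.0, 9.0]),
        "east-west":   deque([1.0, 6.0, 7.0]),
    }
    service_time = 3.0   # time to clear one platoon through the intersection
    clock = 0.0

    while any(queues.values()):
        # pick the approach whose head-of-line platoon arrived earliest
        heads = {a: q[0] for a, q in queues.items() if q}
        oldest_approach = min(heads, key=heads.get)
        clock = max(clock, heads[oldest_approach])   # wait for the platoon if needed
        queues[oldest_approach].popleft()
        clock += service_time
        print(f"t={clock:4.1f}  served {oldest_approach}")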


Fig. VANET-based traffic signal control 
We call this the two-phase approach: we first group the vehicular traffic into platoons and then apply the OJF algorithm; together, this is the oldest arrival first (OAF) algorithm. Our simulation results show that, under light and medium traffic loads, the OAF algorithm reduces the delays experienced by vehicles as they pass through the intersection, as compared with vehicle-actuated methods, Webster's method, and pretimed signal control methods. Under heavy vehicular traffic load, the OAF algorithm performs the same as the vehicle-actuated traffic method but still produces lower delays when compared with Webster's method and the pretimed signal control method.


  VANET APPLICATION

The speed and location information on vehicles that can be disseminated to the traffic signal controller using VANETs is both spatially and temporally fine-grained. Such precise per-vehicle speed and location information can enable additional capabilities, such as being able to predict the time instance when vehicles will reach the stop line of the intersection. This is in comparison with roadside sensors such as loop detectors, which can only detect the presence or absence of vehicles and, at best, estimate the size of vehicle queues. Furthermore, it is cheaper to equip vehicles with wireless devices than to install roadside equipment.
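As a simple illustration of the extra capability mentioned above, the sketch below predicts when each vehicle will reach the stop line from its VANET-reported position and speed; the beacon fields and the constant-speed assumption are illustrative only.

    # Minimal sketch: predicted time to reach the stop line from a VANET beacon.
    # Message fields and the constant-speed assumption are illustrative only.

    def time_to_stop_line(distance_to_stop_m, speed_mps):
        """Seconds until the vehicle reaches the stop line (constant speed)."""
        if speed_mps <= 0:
            return float("inf")        # stopped vehicle: no predicted arrival
        return distance_to_stop_m / speed_mps

    beacons = [
        {"vehicle": "v1", "distance_m": 120.0, "speed_mps": 12.0},
        {"vehicle": "v2", "distance_m": 40.0,  "speed_mps": 8.0},
        {"vehicle": "v3", "distance_m": 15.0,  "speed_mps": 0.0},
    ]

    for b in beacons:
        eta = time_to_stop_line(b["distance_m"], b["speed_mps"])
        print(b["vehicle"], "reaches stop line in", eta, "s")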




REFERENCE ARTICLES


If anyone is interested in doing research in the above subject for BTech/MTech/PhD engineering project work,
Kindly Contact Below

Contact Details:
Santosh Gore Sir
Ph:09096813348 / 8446081043 / 0253-6644344
Email: sai.info2009@gmail.com 
