The trend is definitely moving towards cloud computing, in which supercomputer capability will be available in bursts to anyone, in which case Watson-like capability would be available to the average user much sooner. I do expect the type of natural ...
–– Ray Kurzweil,
“The Significance of Watson”, 2011
Though India has produced many experts in the field of Computer Science, only a small fraction of research work is conducted in India. India's challenges include improving the quality of its undergraduate programs and increasing enrollment in research programs in computing. College enrollment numbers have multiplied in the last decade, but quality lags. Trained college teachers are in acutely short supply, and radical ideas are needed to create them. A scalable master's-level course with sufficient research exposure could be designed to build this teaching force. Creating networked research groups spanning institutions in India and abroad is one way to approach the challenge. India has a long way to go before it overcomes these problems.
At the Fourth International Workshop on Uncovering Plagiarism, Authorship, and Social Software Misuse (PAN 10), the effectiveness of plagiarism detectors was analyzed. It is noted that only a subset of the suspicious documents actually contains plagiarism cases, and that for some cases the sources are unavailable. An important property of a plagiarism case is its degree of obfuscation, a kind of paraphrasing used to disguise the plagiarism attempt; plagiarists often rewrite their source passages to make detection more difficult. In plagiarism detection, one distinguishes between external and intrinsic detection situations: in the external situation, the source document of a plagiarized document is at the detector's disposal; in the intrinsic situation, only the plagiarized document itself is given, and the detector looks for conspicuous changes in writing style. The performance of a plagiarism detector is quantified by the well-known measures of precision and recall, supplemented by a third measure, granularity, which accounts for the fact that detectors sometimes report overlapping or multiple detections for a single plagiarism case.
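The three measures above can be sketched concretely. The sketch below treats each true plagiarism case and each reported detection as a character interval; the interval representation and the simplifying assumption that intervals within each set do not overlap are mine, not PAN's evaluation code.

```python
# Sketch of PAN-style plagiarism-detection measures over character
# intervals (start, end). Assumes the intervals within each set are
# non-overlapping; the real PAN evaluation works on annotated passages.

def overlap(a, b):
    """Number of characters shared by two intervals."""
    return max(0, min(a[1], b[1]) - max(a[0], b[0]))

def micro_precision_recall(cases, detections):
    """Character-level precision and recall of detections against cases."""
    detected_chars = sum(overlap(c, d) for c in cases for d in detections)
    total_detected = sum(d[1] - d[0] for d in detections)
    total_case = sum(c[1] - c[0] for c in cases)
    precision = detected_chars / total_detected if total_detected else 0.0
    recall = detected_chars / total_case if total_case else 0.0
    return precision, recall

def granularity(cases, detections):
    """Average number of detections overlapping each detected case;
    1.0 means every detected case is reported exactly once."""
    counts = [sum(1 for d in detections if overlap(c, d) > 0) for c in cases]
    counts = [n for n in counts if n > 0]  # only cases that were detected
    return sum(counts) / len(counts) if counts else 1.0
```

For example, a detector that covers one 100-character case with two back-to-back 50-character detections attains perfect precision and recall but a granularity of 2, which is exactly the behavior the third measure is designed to penalize.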
The first paper, “Service-Oriented Architecture: Risks and Remedies”, by Deepa V Jose and Smitha Vinod, outlines methods to tackle the problems faced by Service-Oriented Architecture (SOA). The paper is an outcome of efforts to study SOA, its efficiency, and its drawbacks. SOA is a boon for developing highly scalable applications, and with the adoption of dynamic caching techniques and strong security measures, its full potential can be realized.
The second paper, “Improvements to First-Come-First-Served Multiprocessor Scheduling with Gang Scheduling”, by R Siyambalapitiya and M Sandirigama, proposes improved algorithms for the multiprocessor job scheduling problem based on the first-come-first-served strategy. A backfilling technique is used to improve the performance of the proposed algorithms. Results are reported as a percentage gap from a lower bound; deriving a tighter lower bound is suggested as a direction for future research.
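The FCFS-with-backfilling idea can be illustrated as follows. This is a generic sketch, not the authors' algorithm: the job representation, the EASY-style "shadow time" rule (a later job may be moved forward only if it finishes before the head job's reserved start), and the simulation loop are all assumptions for illustration.

```python
import heapq

def fcfs_backfill(jobs, num_procs):
    """Simplified FCFS schedule with EASY-style backfilling.
    jobs: list of (procs_needed, runtime) in arrival order.
    Returns (makespan, start_times). Illustrative, not the paper's
    algorithm: a waiting job is backfilled only if it fits in the
    currently free processors and ends by the head job's reserved
    start time, so the head job is never delayed."""
    queue = list(range(len(jobs)))
    running = []                      # heap of (finish_time, procs_held)
    free = num_procs
    t = 0.0
    start = [None] * len(jobs)

    while queue:
        while running and running[0][0] <= t:     # release finished jobs
            free += heapq.heappop(running)[1]
        head = queue[0]
        need, run = jobs[head]
        if need <= free:                          # head fits: start it
            start[head] = t
            free -= need
            heapq.heappush(running, (t + run, need))
            queue.pop(0)
            continue
        # Reserve the head job: earliest time enough processors free up.
        avail, shadow = free, t
        for finish, procs in sorted(running):
            avail += procs
            shadow = finish
            if avail >= need:
                break
        # Backfill later jobs that fit now and end by the shadow time.
        for j in queue[1:]:
            n2, r2 = jobs[j]
            if n2 <= free and t + r2 <= shadow:
                start[j] = t
                free -= n2
                heapq.heappush(running, (t + r2, n2))
                queue.remove(j)
        t = running[0][0]                         # advance to next completion
    while running:                                # drain remaining jobs
        t = heapq.heappop(running)[0]
    return t, start
```

With jobs `[(2, 10), (4, 5), (1, 4)]` on 4 processors, the third job is started at time 0 alongside the first, because it finishes well before the 4-processor second job could start anyway; plain FCFS would have left it waiting.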
The third paper, “Multi-Class Manifold Preserving Isomap Using Sammon’s Projection”, by Shashwati Mishra and Chittaranjan Pradhan, concentrates on
multi-class manifold geometry preservation. Like MDS, Sammon's mapping tries to preserve the manifold geometry by minimizing Sammon's stress, but Sammon's projection gives better results in preserving small distances. The authors apply Sammon's algorithm instead of MDS for the low-dimensional embedding in the final step of Isomap and obtain a clearer output. Future work aims at a clearer and faster approach to preserving multi-class manifold geometry.
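The key quantity here, Sammon's stress, can be written down directly. The sketch below computes it for point sets given as coordinate tuples; the 1/d weighting inside the sum is what makes small original distances dominate the objective, which is why Sammon's projection preserves local structure better than classical MDS. (In full Isomap the "original" distances would be geodesic graph distances; plain Euclidean distances are used here for brevity.)

```python
from itertools import combinations
from math import dist

def sammon_stress(X, Y):
    """Sammon's stress between original points X and embedded points Y:
    E = (1 / sum d*_ij) * sum (d*_ij - d_ij)^2 / d*_ij over pairs i < j,
    where d* are original and d are embedded distances. Dividing each
    squared error by d*_ij weights small original distances most."""
    pairs = list(combinations(range(len(X)), 2))
    d_orig = [dist(X[i], X[j]) for i, j in pairs]
    d_emb = [dist(Y[i], Y[j]) for i, j in pairs]
    scale = sum(d_orig)
    return sum((do - de) ** 2 / do for do, de in zip(d_orig, d_emb)) / scale
```

A perfect embedding gives zero stress, and any distortion of a short pairwise distance raises the stress more than the same absolute distortion of a long one.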
The fourth paper, “An Optimizing Compiler for Turing Machine Description Language”, by Pinaki Chakraborty, Shweta Taneja, Nandita Anand, Anupama Jha, Diksha Malik, and Ankit Nayar, presents a two-pass optimizing compiler for the Turing Machine Description Language. Experiments showed that the optimizing compiler produces object programs up to 1.784 times shorter and 1.032 times faster than those produced by an existing compiler that does not employ code optimization.
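The kind of local rewriting that shortens object programs can be illustrated with a toy peephole pass. Everything below is hypothetical: the instruction names (L, R, W…, HALT) and the flat instruction-list format are invented for illustration and do not reproduce the paper's actual language or compiler.

```python
# Hypothetical peephole pass over a made-up tape-machine instruction
# list: L / R move the head, W<sym> writes a symbol. The rewrites shown
# (cancelling opposite moves, collapsing redundant writes) are the kind
# of local optimization an optimizing TMDL compiler might perform.

def peephole(code):
    """Cancel adjacent opposite head moves (L;R or R;L is a no-op) and
    collapse consecutive writes to the same cell (the later write wins).
    Both rewrites shorten the object program without changing behavior."""
    out = []
    for instr in code:
        if out and {out[-1], instr} == {"L", "R"}:
            out.pop()                   # opposite moves cancel
        elif out and out[-1].startswith("W") and instr.startswith("W"):
            out[-1] = instr             # later write overwrites earlier one
        else:
            out.append(instr)
    return out
```

For example, `["W1", "W0", "L", "R", "HALT"]` shrinks to `["W0", "HALT"]`: the second write makes the first dead, and the left-right move pair is a no-op.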
The fifth paper, “A Multivalued Dependency-Based Normalization Approach for Symbolic Relational Databases”, by Deepa S, focuses on the design of symbolic multivalued dependency, a form of data dependency for symbolic relational databases based on fuzzy concepts. The paper aims at developing higher-level normal forms for the design of symbolic relational databases. Based on the concept of symbolic multivalued dependency, the fourth normal form defined for the relational model is extended to the symbolic relational database environment. The authors conclude that symbolic relational databases are extensions of classical relational databases, with the objective of representing data that is imprecise, ambiguous, and incomplete in nature.
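The classical notion being extended can be sketched concretely. A multivalued dependency X →→ Y holds in a relation when, for every pair of tuples agreeing on X, the tuple that takes its Y-values from one and its remaining attributes from the other is also present. The check below implements this crisp definition; the paper's symbolic/fuzzy generalisation, which replaces exact matching with similarity-based matching, is not reproduced here.

```python
def satisfies_mvd(rows, X, Y):
    """Check the classical multivalued dependency X ->> Y on a relation
    given as a list of dicts. For every pair t1, t2 that agree on X,
    the tuple taking X and Y from t1 and the remaining attributes from
    t2 must also occur in the relation."""
    attrs = rows[0].keys()
    Z = [a for a in attrs if a not in X and a not in Y]  # the rest
    table = {tuple(sorted(r.items())) for r in rows}
    for t1 in rows:
        for t2 in rows:
            if all(t1[a] == t2[a] for a in X):
                t3 = {**{a: t1[a] for a in X},
                      **{a: t1[a] for a in Y},
                      **{a: t2[a] for a in Z}}
                if tuple(sorted(t3.items())) not in table:
                    return False
    return True
```

In the textbook course/teacher/book example, course →→ teacher holds only when every teacher of a course is paired with every book of that course; dropping one such combination violates the dependency, and 4NF decomposition removes exactly this redundancy.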
The last paper of the issue, “Identifying Relevant Snippets from Ranked Web Documents”, by Shanmugasundaram Hariharan, Thirunavukarasu Ramkumar and Selva Muthukumaran, focuses on identifying text snippets for retrieved web results using statistical approaches. The study results presented in this paper pertain to the identification of snippet words so as to form a snippet tree. The authors conducted experiments based on the Google search engine, which yielded promising results. Further, a normalization technique was applied to prevent unwanted words from climbing up the ranking and spam keywords from being selected.
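A statistical snippet selector of this general flavor can be sketched as follows. This is a stand-in, not the authors' method: the sentence splitter, the stopword list, and the logarithmic length normalization (which keeps long sentences stuffed with repeated words from climbing the ranking) are all illustrative choices.

```python
import re
from math import log

def best_snippet(document, query,
                 stopwords=frozenset({"the", "a", "of", "to", "and"})):
    """Pick the sentence that best covers the query terms. Scores are
    query-term counts divided by log(sentence length + 1), a simple
    normalization so that sheer repetition of words in a long sentence
    does not dominate; stopwords in the query are ignored."""
    sentences = re.split(r"(?<=[.!?])\s+", document.strip())
    terms = [w for w in query.lower().split() if w not in stopwords]
    def score(s):
        words = re.findall(r"[a-z]+", s.lower())
        if not words:
            return 0.0
        return sum(words.count(t) for t in terms) / log(len(words) + 1)
    return max(sentences, key=score)
```

The same per-sentence scores could feed a snippet tree by ranking candidate words or phrases instead of whole sentences; only the flat sentence-level selection is shown here.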
-- C R K Prasad
Consulting Editor