Retrieving information from large document collections can be difficult. A variety of software programs are used to help users search for and retrieve results, simplifying what would otherwise be a manual search. This test plan communicates the approach used during program creation. It covers the source files to be implemented, their functionality, the objectives, the scope, and the approach. In this assignment, I will clearly identify what is considered in and out of scope, as well as the expected test deliverables.
The main objective is to develop an information retrieval engine that can index a collection of documents and retrieve matching documents in response to a keyword query. The program will rank results using the vector space model and will be written in Java. The must-have functionalities are the top priority during the design phase of the project.
During program development, all the requirements outlined will be incorporated and tested to ensure that the required output is achieved. Any other requirements added to the program will also be tested and the results analyzed to ensure their accuracy. At the end of the development stage, the program should:
In every program development phase, there are bound to be certain risks which unless well mitigated, could cause the whole program to fail. In our case, some of the risks identified are outlined below with their respective mitigation strategies.
Risk identified: Complexity.
Mitigation strategy: Keep the program as simple as possible. Complex designs increase the probability of introducing an error during the implementation phase (Lewis, 2014).

Risk identified: Accessibility.
Mitigation strategy: On the Java platform, the program can be denied access if classes are not correctly initialized. To mitigate this, ensure that all classes in the source files are properly initialized before any sensitive operation.
In this section, the assumptions specific to this project are outlined.
During the development of the program, various functionalities will be included to efficiently execute and retrieve the information. The plan involves creating the modules independently based on their specific function.
The program is called MySearchEngine. This source file contains the main method that initializes and runs the rest of the source files; the searcher, inverted index, tokenizer, indexer, and stemmer classes are declared here. To compile the program, the user runs javac *.java in the source code directory.
In this source file, I will design the program to tokenize the raw query as instructed and then compute the cosine similarity for every document that contains at least one query term. The dot product is built up query term by query term and then divided through by the vector norms. Statements are included to fetch, for each query term, all the documents containing it together with their term frequencies. The code then iterates through each of these documents, computes the tf-idf weight, multiplies it into the dot product, and either adds the result to the document's running total or initializes its entry in the dot-products HashMap.
Within the same branch, the query term's squared weight is added to the query vector norm. The cosine similarity scores for each document are then assembled using a priority queue that automatically keeps documents in descending order of score. Finally, the source file prints out all the documents in order of cosine similarity.
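The ranking steps above can be sketched as follows. This is a minimal illustration, not the assignment's actual code: the `Posting` record, method names, and data-structure shapes are assumptions, and the sketch assumes the document vector norms are stored squared, as described later in this plan.

```java
import java.util.*;

public class CosineRanker {
    // Hypothetical posting: a document name and the term's frequency in it.
    record Posting(String doc, int tf) {}

    // Rank documents by cosine similarity against a tokenized query.
    // invertedIndex maps a term to its postings; idf maps a term to its IDF;
    // docNormsSq maps a document to its precomputed squared vector norm.
    static List<Map.Entry<String, Double>> rank(
            List<String> queryTokens,
            Map<String, List<Posting>> invertedIndex,
            Map<String, Double> idf,
            Map<String, Double> docNormsSq) {

        // Build query term frequencies.
        Map<String, Integer> queryTf = new HashMap<>();
        for (String t : queryTokens) queryTf.merge(t, 1, Integer::sum);

        Map<String, Double> dotProducts = new HashMap<>();
        double queryNormSq = 0.0;

        for (Map.Entry<String, Integer> e : queryTf.entrySet()) {
            Double termIdf = idf.get(e.getKey());
            if (termIdf == null) continue;            // term not in the index
            double queryWeight = e.getValue() * termIdf;
            queryNormSq += queryWeight * queryWeight; // add to query vector norm
            for (Posting p : invertedIndex.getOrDefault(e.getKey(), List.of())) {
                double docWeight = p.tf() * termIdf;  // tf-idf weight in the document
                // Initialize or accumulate the document's dot product.
                dotProducts.merge(p.doc(), queryWeight * docWeight, Double::sum);
            }
        }

        // Max-heap: highest cosine similarity first.
        PriorityQueue<Map.Entry<String, Double>> heap =
            new PriorityQueue<>((a, b) -> Double.compare(b.getValue(), a.getValue()));
        for (Map.Entry<String, Double> e : dotProducts.entrySet()) {
            double cosine = e.getValue()
                / (Math.sqrt(docNormsSq.get(e.getKey())) * Math.sqrt(queryNormSq));
            heap.add(Map.entry(e.getKey(), cosine));
        }

        List<Map.Entry<String, Double>> ranked = new ArrayList<>();
        while (!heap.isEmpty()) ranked.add(heap.poll());
        return ranked;
    }
}
```

Draining the priority queue at the end yields the documents already sorted, which matches the requirement to print results in order of cosine similarity.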
This file indexes all the documents stored in the collection directory collection_dir. A constructor first scans all the documents. For debugging, a line counter (lineNumber++) is incremented for every line read. Each line of text is read into a String with scan.nextLine(), trimmed and stored with fileLines.add(line.trim()), and later split by whitespace into a String array.
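The scanning step can be sketched as below. The class and method names here are illustrative only; they are not the assignment's actual code.

```java
import java.io.File;
import java.io.FileNotFoundException;
import java.util.ArrayList;
import java.util.List;
import java.util.Scanner;

public class LineReader {
    // Read every line of a document, trim it, and keep a running line count
    // for debugging.
    static List<String> readLines(File doc) throws FileNotFoundException {
        List<String> fileLines = new ArrayList<>();
        int lineNumber = 0;                      // incremented per line for debugging
        try (Scanner scan = new Scanner(doc)) {
            while (scan.hasNextLine()) {
                String line = scan.nextLine();   // one line of text as a String
                lineNumber++;
                fileLines.add(line.trim());
            }
        }
        return fileLines;
    }

    // Split a line by whitespace into a String array.
    static String[] splitBySpace(String line) {
        return line.trim().split("\\s+");
    }
}
```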
All the frequencies for all corpus terms are then recorded: for any given term in the corpus, the number of documents it appears in is shown. Each index entry has the form "term: fileName1, termFreq1, fileName2, termFreq2", and so on.
The document filename is acquired and the code iterates through all the tokens in the document, adding to a HashMap of term document frequencies. A further step adds to the term-frequencies-list HashMap to finish building the collections from the corpus. After iterating through all corpus tokens, the inverted index is written out to file with the IDF values appended at the end. The source file finally calculates the IDF, rounds it, and builds the final string to write to file. In the stopwords file, each line contains one stop word.
The inverse document frequencies are calculated using the natural logarithm. Relative to the classical IDF formula, an increment of one is applied so that query terms that are not present in the index still receive a defined value. A repeating group that must appear at least once would be indicated by {}, but this character is not present in my index. The vector norm value is precomputed for use in the cosine similarity calculation; the square root is not taken, so the stored value is the norm squared.
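The smoothing described above is ambiguous about where the increment of one goes; one plausible reading, sketched below as an assumption, adds one to the document frequency so that terms absent from the index (df = 0) still get a finite value. The rounding helper reflects the three-decimal-place convention described later for the index file.

```java
public class IdfCalculator {
    // Smoothed inverse document frequency using the natural log.
    // Assumption: the "+1" is applied to the document frequency so that
    // idf stays defined for terms with df = 0 (terms not in the index).
    static double idf(int totalDocs, int docFreq) {
        return Math.log((double) totalDocs / (1 + docFreq));
    }

    // Round a value to three decimal places, matching the index file
    // convention described in this plan.
    static double round3(double v) {
        return Math.round(v * 1000.0) / 1000.0;
    }
}
```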
This source file converts the index file back into class data structures: an inverted index holding the document term frequencies. It reads each line of the text file into a String and splits the line by whitespace into a String array. The trailing IDF part is split off first, followed by all the term-frequency pairs for the documents. A duplicate-free set of corpus document names is then built up, and each document's contribution is added to the vector norm calculation.
This class will be used to transform a word into its root form. The input word can be provided one character at a time by calling add(), or all at once by calling one of the stem() methods (Schymik, 2012). After a word has been stemmed, it can be retrieved, or a reference to the internal buffer can be obtained. This is to be implemented using the Porter stemmer (Porter, 1980). In each indexed file, fields are separated by commas, lines are separated by the end-of-line character, and all non-integer quantities are rounded to three decimal places.
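The add()/stem() interface described above can be sketched as follows. For brevity, only a few of the Porter algorithm's step-1a suffix rules are shown; the full algorithm has five steps with measure conditions, so this is an interface illustration rather than a complete stemmer.

```java
public class Stemmer {
    private final StringBuilder buf = new StringBuilder();

    // Provide the input word one character at a time.
    public void add(char ch) { buf.append(Character.toLowerCase(ch)); }

    // Convenience overload: provide the whole word at once, then stem it.
    public void stem(String word) {
        buf.setLength(0);
        for (char c : word.toCharArray()) add(c);
        stem();
    }

    // Apply a few of Porter's step-1a suffix rules (illustration only).
    public void stem() {
        String w = buf.toString();
        if (w.endsWith("sses"))      w = w.substring(0, w.length() - 2);
        else if (w.endsWith("ies"))  w = w.substring(0, w.length() - 2);
        else if (w.endsWith("s") && !w.endsWith("ss"))
                                     w = w.substring(0, w.length() - 1);
        buf.setLength(0);
        buf.append(w);
    }

    // Retrieve the stemmed word from the internal buffer.
    @Override public String toString() { return buf.toString(); }
}
```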
The class containing the main method gives the program its name: MySearchEngine. Once compiled, each incorporated source file produces its own class file alongside the final MySearchEngine.class. Once the program compiles successfully, the expected behaviour is that invoking it from the command line with java MySearchEngine index collection_dir index_dir stopwords.txt indexes all the documents stored in collection_dir. The index file is named index.txt. The stop words are listed in the stopwords.txt file; these stop words are neither stemmed nor added to the index terms.
When java MySearchEngine search index_dir num_docs keyword_list is invoked, the program prints a ranked list of the top num_docs documents matching the keyword_list query. The most relevant document appears first, with entries separated by white space on the command line.
After tokenization, no final token contains a hyphen, and any text within single quotation marks appears as a single token. Acronyms are retained as single tokens with the periods removed. All other text is split into tokens using delimiters.
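The tokenization rules above can be sketched with a regular expression, trying quoted spans and acronyms before plain words. This is a minimal sketch under assumptions: it presumes quotes are balanced, and it simply strips any hyphen inside a quoted span to preserve the no-hyphen rule, a point the rules above leave open.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class Tokenizer {
    // Order matters: quoted spans and acronyms must match before plain words.
    private static final Pattern TOKEN = Pattern.compile(
        "'([^']+)'"                    // 1: text inside single quotes
      + "|((?:[A-Za-z]\\.){2,})"       // 2: acronym such as U.S.A.
      + "|([A-Za-z0-9]+)"              // 3: plain run of letters/digits
    );

    static List<String> tokenize(String text) {
        List<String> tokens = new ArrayList<>();
        Matcher m = TOKEN.matcher(text);
        while (m.find()) {
            if (m.group(1) != null) {
                // Quoted text becomes one token; hyphens are dropped
                // (an assumption, since the spec leaves this case open).
                tokens.add(m.group(1).replace("-", ""));
            } else if (m.group(2) != null) {
                // Acronym kept as a single token with periods removed.
                tokens.add(m.group(2).replace(".", ""));
            } else {
                // Plain words: hyphenated text naturally splits into parts,
                // so no final token contains a hyphen.
                tokens.add(m.group(3));
            }
        }
        return tokens;
    }
}
```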
Conclusion.
The program should compile all source files successfully and return the expected results. All the source files are saved with a .java extension, and their respective class files are created after successful compilation. The source files can be built in any Java-based environment or compiled from a command-line interface. During testing, the information retrieval program was found to be effective: it handled all the parameters correctly and produced the correct and expected output. In conclusion, the program development was successful.
References
Lewis, N. (2014). Java-based malware: Mitigating the threat of JRE vulnerabilities. Retrieved from TechTarget Network: https://searchsecurity.techtarget.com/tip/Java-based-malware-Mitigating-the-threat-of-JRE-vulnerabilities
Schymik, G. (2012). The Impact of Subject Indexes on Semantic Indeterminacy in Enterprise Document Retrieval. Arizona: Arizona State University.
Porter, M. F. (1980). An algorithm for suffix stripping. Program, 14(3), 130-137. Retrieved from https://www.tartarus.org/~martin/PorterStemmer