Browse by author "Silva, Joaquim F."
Showing 1 - 3 of 3
- An n-gram cache for large-scale parallel extraction of multiword relevant expressions with LocalMaxs (Publication)
  Gonçalves, Carlos; Silva, Joaquim F.; Cunha, José C.
  LocalMaxs extracts relevant multiword terms based on their cohesion, but it is computationally intensive, a critical issue for very large natural-language corpora. The corpus properties concerning n-gram distribution determine the algorithm's complexity and were empirically analyzed for corpora of up to 982 million words. A parallel LocalMaxs implementation exhibits almost linear relative efficiency, speedup, and sizeup when executed with up to 48 cloud virtual machines and a distributed key-value store. To reduce remote data communication, we present a novel n-gram cache with cooperative warm-up, leading to a reduced miss ratio and time penalty. An analytical cache model, based on empirical corpus data, estimates the performance of the cohesion calculation over n-gram expressions; the model's estimates agree with the real execution results.
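The caching idea in the entry above can be sketched as a plain LRU cache sitting in front of a remote n-gram store, preloaded with the n-grams expected to be queried most often. This is only an illustration of the general idea, not the paper's cooperative warm-up design; the class and parameter names are invented, and the "remote" store is simulated by a local dict.

```python
from collections import OrderedDict

class NgramCache:
    """Minimal LRU cache in front of a (simulated) remote n-gram store.

    Illustrative sketch only: the paper's cache uses a
    cooperative-based warm-up across machines, which is not
    reproduced here.
    """

    def __init__(self, capacity, backing_store):
        self.capacity = capacity
        self.store = backing_store      # dict: n-gram tuple -> frequency
        self.cache = OrderedDict()      # insertion order tracks recency
        self.hits = self.misses = 0

    def warm_up(self, hot_ngrams):
        # Preload n-grams expected to be queried most frequently.
        for g in hot_ngrams[:self.capacity]:
            self.cache[g] = self.store.get(g, 0)

    def get(self, ngram):
        if ngram in self.cache:
            self.hits += 1
            self.cache.move_to_end(ngram)   # mark as most recently used
            return self.cache[ngram]
        self.misses += 1
        value = self.store.get(ngram, 0)    # stands in for a remote fetch
        self.cache[ngram] = value
        if len(self.cache) > self.capacity:
            self.cache.popitem(last=False)  # evict least recently used
        return value

    def miss_ratio(self):
        total = self.hits + self.misses
        return self.misses / total if total else 0.0
```

Warming the cache with the most frequent n-grams is what drives the miss ratio down: under a Zipfian query distribution, a small set of hot n-grams accounts for most lookups.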
- A parallel algorithm for statistical multiword term extraction from very large corpora (Publication)
  Gonçalves, Carlos; Silva, Joaquim F.; Cunha, Jose Alberto C.
  Multi-word Relevant Expressions (REs) can be defined as sequences of words (n-grams) with strong semantic meaning, such as "ice melting" and "Ministere des Affaires Etrangeres", useful in Information Retrieval, Document Clustering, Classification, and Document Indexing. The need to extract REs in several languages has steered research toward statistical approaches rather than symbolic methods, since the former are language-independent. Based on the assumption that REs exhibit strong cohesion between their consecutive n-grams, the LocalMaxs algorithm is a language-independent approach for extracting REs. Despite its good precision, this extractor is time-consuming and inoperable for Big Data if implemented sequentially. This paper presents the first parallel and distributed version of the algorithm, achieving almost linear speedup and sizeup when processing corpora of up to 1 billion words, using up to 54 virtual machines in a public cloud. This parallel version exploits statistical knowledge of the n-grams in the corpus to promote locality of reference.
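The cohesion assumption behind LocalMaxs can be sketched as follows: an n-gram is kept as a relevant expression when its "glue" is a local maximum relative to the glues of its sub- and super-grams. The sketch below uses the Symmetric Conditional Probability (SCP) glue, one cohesion measure commonly paired with LocalMaxs; the exact glue and boundary conditions of the paper's version may differ, and the toy corpus is invented.

```python
from collections import Counter

def scp_glue(ngram, freq, total):
    """SCP-style glue: p(ngram)^2 over the average product of
    probabilities across all two-way splits of the n-gram."""
    p = lambda g: freq[g] / total
    n = len(ngram)
    if n == 1:
        return p(ngram)
    avg = sum(p(ngram[:i]) * p(ngram[i:]) for i in range(1, n)) / (n - 1)
    return p(ngram) ** 2 / avg if avg else 0.0

def is_relevant(ngram, freq, total):
    """Local-maximum criterion (sketch): the n-gram's glue must not be
    exceeded by its (n-1)-gram parts and must strictly exceed the glue
    of every (n+1)-gram that contains it."""
    g = scp_glue(ngram, freq, total)
    n = len(ngram)
    subs = [ngram[:-1], ngram[1:]] if n > 2 else []
    supers = [s for s in freq if len(s) == n + 1
              and (s[:-1] == ngram or s[1:] == ngram)]
    return (all(g >= scp_glue(s, freq, total) for s in subs)
            and all(g > scp_glue(s, freq, total) for s in supers))

# Toy corpus (illustrative): count all 1-, 2-, and 3-grams.
tokens = "ice melting speeds up and ice melting slows the process".split()
freq = Counter()
for n in (1, 2, 3):
    for i in range(len(tokens) - n + 1):
        freq[tuple(tokens[i:i + n])] += 1
total = len(tokens)
```

On this toy corpus, "ice melting" passes the criterion (it always co-occurs, and no surrounding 3-gram is as cohesive), while "the process" does not. Scanning all of `freq` for super-grams is what makes the sequential version expensive on large corpora, which motivates the parallel design.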
- A theoretical model for n-gram distribution in big data corpora (Publication)
  Silva, Joaquim F.; Gonçalves, Carlos Jorge de Sousa; Cunha, José C.
  A wide diversity of applications relies on identifying the sequences of n consecutive words (n-grams) occurring in corpora. Many studies follow an empirical approach for determining the statistical distribution of the n-grams but are usually constrained by corpus sizes, which for practical reasons remain far from Big Data scale. However, Big Data sizes reveal behaviors hidden from the applications at smaller scales, such as in the extraction of relevant information from Web-scale sources. In this paper we propose a theoretical approach for estimating the number of distinct n-grams in each corpus. It is based on the Zipf-Mandelbrot Law and the Poisson distribution, and it allows an efficient estimation of the number of distinct 1-grams, 2-grams, ..., 6-grams, for any corpus size. The proposed model was validated for English and French corpora. We illustrate a practical application of this approach to the extraction of relevant expressions from natural language corpora, and predict its asymptotic behaviour for increasingly large sizes.
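The combination of the Zipf-Mandelbrot Law and the Poisson distribution described above can be sketched in a few lines: if an n-gram of rank r has Zipf-Mandelbrot probability p(r), its occurrence count in a corpus of N n-grams is approximately Poisson with mean N*p(r), so it is observed at least once with probability 1 - exp(-N*p(r)); summing over ranks estimates the number of distinct n-grams. This is a generic illustration of that modeling idea, not the paper's fitted model; the function name and the alpha/beta parameter values are assumptions.

```python
import math

def expected_distinct(n_total, vocab, alpha=1.0, beta=2.7):
    """Expected number of distinct items observed in n_total draws from
    a Zipf-Mandelbrot distribution over `vocab` ranked items.

    Illustrative sketch: p(r) is proportional to 1 / (r + beta)^alpha,
    and each rank's count is approximated as Poisson with mean
    n_total * p(r), so P(rank r seen at least once) = 1 - exp(-mean).
    """
    weights = [1.0 / (r + beta) ** alpha for r in range(1, vocab + 1)]
    z = sum(weights)  # normalization constant of the truncated law
    return sum(1.0 - math.exp(-n_total * w / z) for w in weights)
```

The estimate grows sublinearly with corpus size and saturates at the vocabulary size, which is the asymptotic behaviour the abstract refers to: ever-larger corpora add ever-fewer new distinct n-grams.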
