993 results for Feature construction


Relevance:

100.00%

Publisher:

Abstract:

Identifying the correct sense of a word in context is crucial for many tasks in natural language processing (machine translation is an example). State-of-the-art methods for Word Sense Disambiguation (WSD) build models using hand-crafted features that usually capture shallow linguistic information. Complex background knowledge, such as semantic relationships, is typically either not used, or used in a specialised manner, due to the limitations of the feature-based modelling techniques employed. On the other hand, empirical results from the use of Inductive Logic Programming (ILP) systems have repeatedly shown that they can use diverse sources of background knowledge when constructing models. In this paper, we investigate whether this ability of ILP systems can be used to improve the predictive accuracy of models for WSD. Specifically, we examine the use of a general-purpose ILP system as a method to construct a set of features using semantic, syntactic and lexical information. This feature set is then used by a common modelling technique in the field (a support vector machine) to construct a classifier for predicting the sense of a word. In our investigation we examine one-shot and incremental approaches to feature-set construction applied to monolingual and bilingual WSD tasks. The monolingual tasks use 32 verbs and 85 verbs and nouns (in English) from the SENSEVAL-3 and SemEval-2007 benchmarks, while the bilingual WSD task consists of 7 highly ambiguous verbs in translating from English to Portuguese. The results are encouraging: the ILP-assisted models show substantial improvements over those that simply use shallow features. In addition, incremental feature-set construction appears to identify smaller and better sets of features. Taken together, the results suggest that the use of ILP with diverse sources of background knowledge provides a way to make substantial progress in the field of WSD.
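The pipeline this abstract describes, in which ILP-constructed clauses are evaluated as boolean features for a conventional classifier, can be sketched roughly as follows. This is an illustrative toy, not the paper's system: the relation names, examples and clauses are invented, and the final SVM training step is left out.

```python
# Toy sketch: ILP-constructed features are first-order clauses; at
# prediction time each clause is evaluated as a boolean test on an
# example, yielding a 0/1 feature vector for a conventional classifier.
# All relation names and data below are invented for illustration.

# Background knowledge per example: syntactic/semantic relations.
examples = [
    {"verb": "run", "object": "company",  "object_hypernym": "organization"},
    {"verb": "run", "object": "marathon", "object_hypernym": "event"},
    {"verb": "run", "object": "firm",     "object_hypernym": "organization"},
    {"verb": "run", "object": "race",     "object_hypernym": "event"},
]
senses = ["manage", "move", "manage", "move"]  # labels an SVM would learn

# Each "clause" found by the ILP system becomes one boolean feature.
clauses = [
    ("object_is_organization", lambda e: e["object_hypernym"] == "organization"),
    ("object_is_event",        lambda e: e["object_hypernym"] == "event"),
]

def featurize(e):
    """Evaluate every ILP-constructed clause on example e -> 0/1 vector."""
    return [int(test(e)) for _, test in clauses]

X = [featurize(e) for e in examples]
# A real system would now train an SVM on (X, senses); here we only
# check that the constructed features separate the two senses.
print(X)  # [[1, 0], [0, 1], [1, 0], [0, 1]]
```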

Relevance:

60.00%

Publisher:

Abstract:

This work addresses the study and development of a high-voltage interferometric sensor based on a Pockels cell (an electro-optic modulator) in the reflective ("double pass") topology. The sensor is an integral part of an Optical Potential Transformer (TPO) that uses a white-light interferometric system (WLI, White Light Interferometry), under development by the Optical Sensors Laboratory (LSO) group of PEA-EPUSP, and is capable of directly measuring voltages present in 69 kVRMS-class electric power systems (SEP). To develop the proposed topic, a literature review based on books, articles and theses was carried out to identify transmissive ("single pass") and reflective ("double pass") electro-optic modulator topologies and to select the type of modulator best suited to the application in question. The studies and implementations carried out revealed great potential for the development and application of the "double pass" topology in the interferometric sensor of the TPO high-voltage cell. The topology proved advantageous compared with previously developed TPO prototypes, owing to characteristics such as easier alignment of the light beam, simpler construction and reproduction of the electro-optic crystal, a reduced number of bulk optical components, and increased dielectric strength of the sensor cell. Computer simulations using the finite element method (FEM) supported the design of the sensor cell, in particular the estimation of the half-wave voltage, Vπ, an important parameter for the TPO design. A TPO prototype with a reflective high-voltage sensor cell was implemented and tested in the high-voltage laboratory of IEEUSP, with tests at a nominal voltage of 69 kVrms at 60 Hz and a maximum of 140 kVrms at 60 Hz.
As a result of this work, the knowledge and mastery of techniques for constructing high-voltage interferometric sensors in the reflective topology applied to TPOs is extended.

Relevance:

40.00%

Publisher:

Abstract:

Developing software is a difficult and error-prone activity. Furthermore, the complexity of modern computer applications is significant. Hence, an organised approach to software construction is crucial. Stepwise Feature Introduction – created by R.-J. Back – is a development paradigm in which software is constructed by adding functionality in small increments. The resulting code has an organised, layered structure and can be easily reused. Moreover, interaction with the users of the software and correctness concerns are essential elements of the development process, contributing to the high quality and functionality of the final product. The paradigm of Stepwise Feature Introduction has been successfully applied in an academic environment to a number of small-scale developments. The thesis examines the paradigm and its suitability for the construction of large and complex software systems by focusing on the development of two software systems of significant complexity. Throughout the thesis we propose a number of improvements and modifications that should be applied to the paradigm when developing or reengineering large and complex software systems. The discussion in the thesis covers various aspects of software development that relate to Stepwise Feature Introduction. More specifically, we evaluate the paradigm based on the common practices of object-oriented programming and design and on agile development methodologies. We also outline a strategy for testing systems built with the paradigm of Stepwise Feature Introduction.
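The layered structure Stepwise Feature Introduction produces can be illustrated with a minimal sketch. The class names are invented, and real SFI developments come with refinement and correctness arguments that this toy omits: each layer adds one feature while preserving the behaviour of the layers below.

```python
# Minimal illustration of SFI-style layering (invented example): each
# layer is a subclass that introduces one feature without modifying
# the code of the layers beneath it.

class CounterBase:
    """Layer 0: the minimal feature -- counting up."""
    def __init__(self):
        self.value = 0
    def increment(self):
        self.value += 1

class CounterWithReset(CounterBase):
    """Layer 1: adds a reset feature on top of layer 0."""
    def reset(self):
        self.value = 0

class CounterWithBound(CounterWithReset):
    """Layer 2: adds an upper bound, refining increment."""
    def __init__(self, bound):
        super().__init__()
        self.bound = bound
    def increment(self):
        # The refinement must preserve layer 0's intent (counting up),
        # only restricting when counting happens.
        if self.value < self.bound:
            super().increment()

c = CounterWithBound(2)
c.increment(); c.increment(); c.increment()
print(c.value)  # 2 -- the bound introduced in layer 2 holds
c.reset()
print(c.value)  # 0 -- layer 1's feature still works unchanged
```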

Relevance:

30.00%

Publisher:

Abstract:

Software product lines (SPL) are diverse systems that are developed using a dual engineering process: (a) family engineering defines the commonality and variability among all members of the SPL, and (b) application engineering derives specific products based on the common foundation combined with a variable selection of features. The number of derivable products in an SPL can thus be exponential in the number of features. This inherent complexity poses two main challenges when it comes to modelling: firstly, the formalism used for modelling SPLs needs to be modular and scalable; secondly, it should ensure that all products behave correctly by providing the ability to analyse and verify complex models efficiently. In this paper we propose to integrate an established modelling formalism (Petri nets) with the domain of software product line engineering. To this end we extend Petri nets to Feature Nets. While Petri nets provide a framework for formally modelling and verifying single software systems, Feature Nets offer the same sort of benefits for software product lines. We show how SPLs can be modelled in an incremental, modular fashion using Feature Nets, provide a Feature Nets variant that supports modelling dynamic SPLs, and propose an analysis method for SPLs modelled as Feature Nets. By facilitating the construction of a single model that includes the various behaviours exhibited by the products in an SPL, we make a significant step towards efficient and practical quality assurance methods for software product lines.
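A rough sketch of the Feature Net idea, under the simplifying assumption that each transition carries an application condition over the product line's features and can fire only in products whose feature selection satisfies it (the coffee-machine example and all names are invented):

```python
# Toy Feature Net: a Petri net whose transitions are guarded by a
# condition over the product's selected features. One net then covers
# the behaviour of every product in the SPL.

class FeatureNet:
    def __init__(self, marking):
        self.marking = dict(marking)   # place -> token count
        self.transitions = []          # (name, pre, post, condition)

    def add_transition(self, name, pre, post, condition):
        self.transitions.append((name, pre, post, condition))

    def enabled(self, name, features):
        _, pre, _, cond = next(t for t in self.transitions if t[0] == name)
        return cond(features) and all(
            self.marking.get(p, 0) >= n for p, n in pre.items())

    def fire(self, name, features):
        _, pre, post, _ = next(t for t in self.transitions if t[0] == name)
        assert self.enabled(name, features), f"{name} not enabled"
        for p, n in pre.items():
            self.marking[p] -= n
        for p, n in post.items():
            self.marking[p] = self.marking.get(p, 0) + n

# Two products of a toy coffee-machine SPL differ in the Milk feature.
net = FeatureNet({"idle": 1})
net.add_transition("brew", {"idle": 1}, {"coffee": 1}, lambda f: True)
net.add_transition("add_milk", {"coffee": 1}, {"latte": 1},
                   lambda f: "Milk" in f)

net.fire("brew", {"Milk"})
print(net.enabled("add_milk", {"Milk"}))  # True  -- product with Milk
print(net.enabled("add_milk", set()))     # False -- product without Milk
```

The point of the single guarded net is that analyses can quantify over feature selections instead of enumerating every product separately.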

Relevance:

30.00%

Publisher:

Abstract:

Feature modeling, embedded software, software product lines, tool support

Relevance:

30.00%

Publisher:

Abstract:

We present a georeferenced photomosaic of the Lucky Strike hydrothermal vent field (Mid-Atlantic Ridge, 37°18’N). The photomosaic was generated from digital photographs acquired using the ARGO II seafloor imaging system during the 1996 LUSTRE cruise, which surveyed a ~1 km² zone and provided coverage of ~20% of the seafloor. The photomosaic has a pixel resolution of 15 mm and encloses the areas with known active hydrothermal venting. The final mosaic is generated after an optimization that includes the automatic detection of the same benthic features across different images (feature matching), followed by a global alignment of images based on the vehicle navigation. We also provide software to construct mosaics from large sets of images for which georeferencing information exists (location, attitude, and altitude per image), to visualize them, and to extract data. Georeferencing information can be provided by the raw navigation data (collected during the survey) or result from the optimization obtained from image matching. Mosaics based solely on navigation can be readily generated by any user, but the optimization and global alignment of the mosaic require a case-by-case approach for which no universal software is available. The Lucky Strike photomosaics (optimized and navigated-only) are publicly available through the Marine Geoscience Data System (MGDS, http://www.marine-geo.org). The mosaic-generating and viewing software is available through the Computer Vision and Robotics Group Web page at the University of Girona (http://eia.udg.es/_rafa/mosaicviewer.html).
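Navigation-only placement of images, as described above, can be sketched roughly as follows. The flat-seafloor, downward-looking camera model and all numbers are invented for illustration; they are not the cruise's actual parameters.

```python
# Sketch of navigation-only georeferencing: each photo is placed on the
# seafloor using the vehicle's position, heading and altitude recorded
# at capture time. Camera field of view and the navigation fix below
# are assumed values, not the ARGO II system's.

import math

def ground_footprint(x, y, heading_deg, altitude, fov_deg=40.0):
    """Return the world position of the image centre, its orientation,
    and the approximate footprint width of a downward-looking camera
    (flat-seafloor assumption)."""
    width = 2 * altitude * math.tan(math.radians(fov_deg / 2))
    return (x, y, heading_deg, width)

# One navigation fix: easting, northing (m), heading (deg), altitude (m).
x, y, heading, width = ground_footprint(1250.0, 890.0, 45.0, 5.0)
print(round(width, 2))  # footprint ~3.64 m across at 5 m altitude
```

The optimization step the abstract mentions would then refine these navigation-derived placements using feature matches between overlapping images.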

Relevance:

30.00%

Publisher:

Abstract:

The Rock Island Centennial Bridge spanning the Mississippi River between Rock Island, Illinois and Davenport, Iowa was opened to traffic on July 12, 1940. It is a thoroughly modern, four-lane highway bridge, adequate in every respect for present-day high speed passenger and transport traffic. The structure is ideally situated to provide rapid transit between the business districts of Rock Island and Davenport and serves not only the local or shuttle traffic in the Tri-City Area, but also heavy through motor travel on U.S. Highways 67 and 150. The Centennial Bridge is notable in several respects. The main spans are box girder rib tied arches, a type rather unusual in America and permitting simplicity in design with pleasing appearance. The Centennial Bridge is the only bridge across the Mississippi providing for four lanes of traffic with separation of traffic in each direction. It is a toll bridge operating alongside a free bridge and has the lowest rates of toll of any toll bridge on the Mississippi River. It was financed entirely by the City of Rock Island with no obligation on the taxpayers; there was no federal or state participation in the financing. But perhaps the most outstanding feature of the new bridge is the great need for it. A few remarks on the communities served by the new structure, the services rendered, and some statistics on cross-river traffic in the Tri-City Area will emphasize the reasons for constructing the Centennial Bridge.

Relevance:

30.00%

Publisher:

Abstract:

Understanding the emplacement and growth of intrusive bodies in terms of mechanism, duration, thermal evolution and rates is fundamental to crustal evolution. Recent studies show that many plutons grow over several Ma by in situ accretion of discrete magma pulses, which constitute small-scale magmatic reservoirs. The residence time of magmas, and hence their capacity to interact and differentiate, is controlled by the local thermal environment. The latter is highly dependent on 1) the emplacement depth, 2) the composition of the magmas and country rock, 3) the country rock thermal conductivity, 4) the rate of magma injection and 5) the geometry of the intrusion. In shallow-level plutons, where magmas solidify quickly, evidence for magma mixing and/or differentiation processes is considered by many authors to be inherited from deeper levels. This work shows, however, that in-situ differentiation and magma interactions occurred within basaltic and felsic sills at shallow depth (0.3 GPa) in the St-Jean-du-Doigt (SJDD) bimodal intrusion, France. This intrusion was emplaced ca. 347 Ma ago (ID-TIMS U/Pb on zircon) in the Precambrian crust of the Armorican massif and preserves remarkable sill-like emplacement processes of bimodal mafic-felsic magmas. Field evidence coupled with high-precision zircon U-Pb dating documents progressive thermal maturation within the incrementally built lopolith. Early m-thick mafic sills (eastern part) form the roof of the intrusion and are homogeneous and fine-grained, with planar contacts with neighbouring felsic sills; within a minimal 0.8 Ma time span, the system gets warmer (western part). Sills are emplaced by under-accretion beneath the old eastern part, interact and mingle. A striking feature of this younger, warmer part is in-situ differentiation of the mafic sills in the top 40 cm of the layer, which suggests liquid survival in the shallow crust.
Rheological and thermal models were run in order to determine the parameters required to allow these observed in-situ differentiation-accumulation processes. Strong constraints such as total emplacement duration (ca. 0.8 Ma, TIMS date) and pluton thickness (1.5 km, gravity model) allow a quantitative estimation of the various parameters required (injection rates, incubation time, ...). The results show that in-situ differentiation may be achieved in less than 10 years at such shallow depth, provided that: (1) The differentiating sills are injected beneath consolidated, yet still warm, basalt sills, which act as low-conductivity insulating screens (eastern part formation in the SJDD intrusion). The latter are emplaced in a very short time (800 years) at a high injection rate (0.5 m/y) in order to create a "hot zone" in the shallow crust (incubation time). This implies that nearly 1/3 of the pluton (400 m) is emplaced by subsequent and sustained magmatic activity occurring on a short time scale at the very beginning of the system. (2) Once incubation is achieved, the calculations show that a small hot zone is created at the base of the sill pile, where new injections stay above their solidus temperature and may interact and differentiate. Extraction of differentiated residual liquids might eventually take place and mix with newly injected magma, as documented in active syn-emplacement shear zones within the "warm" part of the pluton. (3) Finally, the model shows that in order to maintain a permanent hot zone at shallow level, the injection rate must be 0.03 m/y, with injection of 5 m thick basaltic sills every 130 yr, implying formation of a 15 km thick pluton. As this thickness contradicts the one calculated for SJDD (1.5 km) and greatly exceeds the average thickness observed for many shallow-level plutons, I infer that there is no permanent hot zone (or magma chamber) at such shallow level.
I rather propose the formation of small, ephemeral (10-15 yr) reservoirs, which represent only small portions of the final size of the pluton. Thermal calculations show that, in the case of SJDD, 5 m thick basaltic sills emplaced every 1500 y allow the formation of such ephemeral reservoirs. The latter are formed by several sills, which are in a mushy state and may interact and differentiate during a short time. The mineralogical, chemical and isotopic data presented in this study suggest a signature intermediate between E-MORB-like and arc-like for the SJDD mafic sills and feeder dykes. The mantle source involved produced hydrated magmas and may be asthenosphere modified by "arc-type" components, probably related to a subducting slab. Combined fluid-mobile/immobile trace elements and Sr-Nd isotopes suggest that such subduction components are mainly fluids derived from altered oceanic crust, with a minor effect from the subducted sediments. A close match between the SJDD compositions and BABB may point to a continental back-arc setting with little crustal contamination. If so, the SJDD intrusion is a major witness of an extensional tectonic regime during the Early Carboniferous, linked to the subduction of the Rheno-Hercynian Ocean beneath the Variscan terranes. Also of interest is the unusual association of cogenetic (same isotopic compositions) K-feldspar A-type granite and albite granite. A-type granites may form by magma mixing between the mafic magma and crustal melts. Alternatively, they might derive from the melting of a biotite-bearing quartzo-feldspathic crustal protolith triggered by early mafic injections at low crustal levels. Albite granite may form by remelting of plagioclase cumulates issued from A-type magma differentiation.
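The ephemeral-reservoir argument rests on thin sills cooling conductively within years unless they are insulated. A back-of-envelope check (not a calculation from the thesis, and using an assumed typical rock diffusivity) illustrates the timescale:

```python
# Order-of-magnitude conductive cooling time of a sill of thickness d:
# tau ~ (d/2)^2 / kappa, with kappa ~ 1e-6 m^2/s a typical thermal
# diffusivity of rock (assumed value, not from the thesis).

KAPPA = 1e-6              # thermal diffusivity, m^2/s (assumed)
SECONDS_PER_YEAR = 3.156e7

def cooling_time_years(thickness_m):
    half = thickness_m / 2
    return half**2 / KAPPA / SECONDS_PER_YEAR

# A lone 5 m basaltic sill freezes in well under a year, so sustaining
# liquid requires insulation by earlier, still-warm sills.
print(round(cooling_time_years(5.0), 1))  # ~0.2 yr
```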

Relevance:

30.00%

Publisher:

Abstract:

This picture shows construction work underway on the Thistle Complex. The groundwork, supporting pillars, and preliminary work on the concrete walls can be seen. The most notable feature is the exposed foundation of the Thistle Theatre to the left. The adjacent lecture halls are also taking form to the right.

Relevance:

30.00%

Publisher:

Abstract:

This picture shows construction work underway on the Thistle Complex. The groundwork, supporting pillars, and concrete walls can be seen. The most notable feature is the exposed foundation of the Thistle Theatre to the left. The adjacent lecture halls are also taking form to the right.

Relevance:

30.00%

Publisher:

Abstract:

The curse of dimensionality is a major problem in the fields of machine learning, data mining and knowledge discovery. Exhaustive search for the optimal subset of relevant features in a high-dimensional dataset is NP-hard. Sub-optimal population-based stochastic algorithms such as GP and GA are good choices for searching through large search spaces, and are usually more feasible than exhaustive and deterministic search algorithms. On the other hand, population-based stochastic algorithms often suffer from premature convergence on mediocre sub-optimal solutions. The Age Layered Population Structure (ALPS) is a novel metaheuristic for overcoming the problem of premature convergence in evolutionary algorithms, and for improving search in the fitness landscape. The ALPS paradigm uses an age measure to control breeding and competition between individuals in the population. This thesis uses a modification of the ALPS GP strategy called Feature Selection ALPS (FSALPS) for feature subset selection and classification in varied supervised learning tasks. FSALPS uses a novel frequency-count system to rank features in the GP population based on evolved feature frequencies. The ranked features are translated into probabilities, which are used to control evolutionary processes such as terminal-symbol selection for the construction of GP trees/sub-trees. The FSALPS metaheuristic continuously refines the feature subset selection process while simultaneously evolving efficient classifiers through a non-converging evolutionary process that favors selection of features with high discrimination of class labels. We investigated and compared the performance of canonical GP, ALPS and FSALPS on high-dimensional benchmark classification datasets, including a hyperspectral image. Using Tukey’s HSD ANOVA test at a 95% confidence interval, ALPS and FSALPS dominated canonical GP in evolving smaller but efficient trees with fewer bloated expressions.
FSALPS significantly outperformed canonical GP, ALPS and some feature selection strategies reported in the related literature on dimensionality reduction.
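The frequency-count ranking at the heart of FSALPS can be sketched as follows. This is a simplification with invented data, detached from the ALPS GP run it would normally sit inside: features that appear often in evolved trees receive proportionally higher probability of being picked as terminals when new trees and sub-trees are constructed.

```python
# Sketch of frequency-count feature ranking: count feature occurrences
# across a population of GP trees and normalise the counts into a
# sampling distribution for terminal-symbol selection.

from collections import Counter

def feature_probabilities(evolved_trees):
    """Each tree is given here as a flat list of its terminal symbols
    (a simplification of a real GP tree)."""
    counts = Counter(f for tree in evolved_trees for f in tree)
    total = sum(counts.values())
    return {f: c / total for f, c in counts.items()}

# Invented population of three evolved trees over features f1..f3.
population = [["f1", "f3", "f1"], ["f1", "f2"], ["f3", "f1"]]
probs = feature_probabilities(population)
print(round(probs["f1"], 3))  # 0.571 -- f1 (4 of 7 slots) dominates
```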

Relevance:

30.00%

Publisher:

Abstract:

Regulatory agencies such as Europol, Frontex, Eurojust and CEPOL, as well as bodies such as OLAF, have over the past decade become increasingly active within the institutional architecture constituting the EU’s Area of Freedom, Security and Justice, and are now placed at the forefront of implementing and developing the EU’s internal security model. A prominent feature of agency activity is the large-scale proliferation of ‘knowledge’ on security threats via the production of policy tools such as threat assessments, risk analyses, and periodic and situation reports. These instruments now play a critical role in providing the evidence base that supports EU policymaking, with agency-generated ‘knowledge’ feeding political priority setting and decision-making within the EU’s new Internal Security Strategy (ISS). This paper examines the nature and purpose of knowledge generated by EU Home Affairs agencies. It asks: where does this knowledge originate? How does it measure against criteria of objectivity, scientific rigour, reliability and accuracy? And how is it processed in order to frame threats, justify actions and set priorities under the ISS?

Relevance:

30.00%

Publisher:

Abstract:

Heat waves are expected to increase in frequency and magnitude with climate change. The first part of a study to produce projections of the effect of future climate change on heat-related mortality is presented. Separate city-specific empirical statistical models that quantify significant relationships between summer daily maximum temperature (Tmax) and daily heat-related deaths are constructed from historical data for six cities: Boston, Budapest, Dallas, Lisbon, London, and Sydney. ‘Threshold temperatures’ above which heat-related deaths begin to occur are identified. The results demonstrate significantly lower thresholds in ‘cooler’ cities exhibiting lower mean summer temperatures than in ‘warmer’ cities exhibiting higher mean summer temperatures. Analysis of individual ‘heat waves’ illustrates that a greater proportion of mortality is due to mortality displacement in cities with less sensitive temperature–mortality relationships than in those with more sensitive relationships, and that mortality displacement is no longer a feature more than 12 days after the end of the heat wave. Validation techniques through residual and correlation analyses of modelled and observed values and comparisons with other studies indicate that the observed temperature–mortality relationships are represented well by each of the models. The models can therefore be used with confidence to examine future heat-related deaths under various climate change scenarios for the respective cities (presented in Part 2).
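The model form described above (no heat-related deaths below a city-specific threshold temperature, rising with daily maximum temperature above it) can be sketched as a simple piecewise-linear function. The parameters below are invented for illustration; they are not the paper's fitted coefficients.

```python
# Piecewise-linear temperature-mortality sketch: zero below a city's
# threshold, linear in (Tmax - threshold) above it. Threshold and
# slope are illustrative, not fitted values from the study.

def heat_deaths(t_max, threshold, slope):
    """Daily heat-related deaths as a function of daily max temperature."""
    return max(0.0, slope * (t_max - threshold))

print(heat_deaths(26.0, 23.0, 2.5))  # 7.5 excess deaths on a 26 C day
print(heat_deaths(21.0, 23.0, 2.5))  # 0.0 -- below the threshold
```

A 'cooler' city would be represented by a lower threshold, and a more sensitive city by a steeper slope.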

Relevance:

30.00%

Publisher:

Abstract:

This article extends the traditions of style-based criticism through an encounter with the insights that can be gained from engaging with filmmakers at work. By bringing into relationship two things normally thought of as separate, production history and disinterested critical analysis, the discussion aims to extend the subjects which criticism can appreciate, as well as providing some insights into the creative process. Drawing on close analysis, on observations made during fieldwork and on access to earlier cuts of the film, this article looks at a range of interrelated decision-making anchored by the reading of a particular sequence. The article examines changes the film underwent in the different stages of production, and some of the inventions deployed to ensure key themes and ideas remained in play as other elements changed. It draws conclusions which reveal perspectives on the filmmaking process, on collaboration, and on the creative response to material realities. The article reveals elements of the complexity of the process of the construction of image and soundtrack, and extends the range of filmmakers’ choices which are part of a critical dialogue. It has a relationship to ‘Sleeping with half open eyes: dreams and realities in The Cry of the Owl’, Movie: A Journal of Film Criticism, 1 (2010), which provides a broader interpretative context for the enquiry.