967 results for explicit categorization


Relevance: 20.00%

Abstract:

In this paper we investigate whether conventional text categorization methods may suffice to infer different verbal intelligence levels. This research goal relies on the hypothesis that the vocabulary speakers make use of reflects their verbal intelligence level. Automatic estimation of the verbal intelligence of users in a spoken language dialog system may be useful when defining an optimal dialog strategy, as it improves the system's adaptation capabilities. The work is based on a corpus containing descriptions (i.e., monologs) of a short film by test persons with different educational backgrounds, together with the verbal intelligence scores of the speakers. First, a one-way analysis of variance was performed to compare the monologs with the film transcription and to demonstrate that there are differences in the vocabulary used by test persons with different verbal intelligence levels. Then, for the classification task, the monologs were represented as feature vectors using the classical TF–IDF weighting scheme. The Naive Bayes, k-nearest neighbors and Rocchio classifiers were tested. We describe and compare these classification approaches, define the optimal classification parameters and discuss the classification results obtained.
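A minimal sketch of the classification setup described above, using scikit-learn (an assumption; the paper does not name its implementation). The corpus strings and labels are placeholders, and NearestCentroid stands in for the Rocchio classifier:

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.neighbors import KNeighborsClassifier, NearestCentroid
from sklearn.pipeline import make_pipeline

# Hypothetical training data: one monolog (film description) per speaker,
# labelled with a verbal-intelligence class.
monologs = ["the man walks into the kitchen and looks around",
            "a person enters the room carrying something",
            "the protagonist returns home and inspects the kitchen"]
labels = ["high", "low", "high"]
test_monolog = ["someone comes in and looks at the room"]

# The three classifiers compared in the paper; NearestCentroid is
# scikit-learn's closest analogue of a Rocchio classifier.
for clf in (MultinomialNB(), KNeighborsClassifier(n_neighbors=1), NearestCentroid()):
    pipe = make_pipeline(TfidfVectorizer(), clf)  # TF-IDF vectors -> classifier
    pipe.fit(monologs, labels)
    print(type(clf).__name__, pipe.predict(test_monolog))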

Relevance: 20.00%

Abstract:

In this paper, we describe new results and improvements to a language identification (LID) system based on PPRLM previously introduced in [1] and [2]. In this case, we use as parallel phone recognizers the ones provided by the Brno University of Technology for the Czech, Hungarian, and Russian languages, and instead of traditional n-gram language models we use a language model built from a ranking of the most frequent and discriminative n-grams. In this language model approach, the distance between the ranking for the input sentence and the ranking for each language is computed, based on the difference in relative positions for each n-gram. This approach can reliably model longer-span information than traditional language models, yielding more reliable estimations. We also describe the modifications we have introduced over time to the original ranking technique, e.g., different discriminative formulas to establish the ranking, variations of the template size, the suppression of repeated consecutive phones, and a new clustering technique for the ranking scores. Results show that this technique provides a 12.9% relative improvement over PPRLM. Finally, we also report results where the traditional PPRLM and our ranking technique are combined.
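The ranking idea can be illustrated with a short sketch (all values are illustrative; this is not the authors' implementation): each language model is the ranked list of its most frequent n-grams, and an utterance is scored against a language by accumulating the difference in relative rank positions, with a fixed penalty for n-grams absent from the model:

from collections import Counter

def ngram_ranking(phones, n=2, top=500):
    """Rank the most frequent n-grams of a phone sequence (rank 0 = most frequent)."""
    grams = [tuple(phones[i:i + n]) for i in range(len(phones) - n + 1)]
    return {g: r for r, (g, _) in enumerate(Counter(grams).most_common(top))}

def ranking_distance(utt_rank, lang_rank, penalty=1000):
    """Sum the differences in relative rank positions; unseen n-grams pay a fixed penalty."""
    return sum(abs(r - lang_rank.get(g, penalty)) for g, r in utt_rank.items())

# Hypothetical use: one ranking per language, built from training output of the
# corresponding phone recognizer; the language with the closest ranking wins.
lang_models = {"cs": ngram_ranking("abacabadabac"), "hu": ngram_ranking("bcbdbcbabdbc")}
utterance = ngram_ranking("abacabab")
print(min(lang_models, key=lambda L: ranking_distance(utterance, lang_models[L])))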

Relevance: 20.00%

Abstract:

The purpose of this paper is to use predictive control to take advantage of future information in order to improve reference tracking. The controller attempts to increase the bandwidth of conventional regulators by using future information about the reference, which is assumed to be known in advance. A method for designing such a controller is also proposed. A simulation comparison with a conventional regulator is carried out on a four-phase buck converter, and advantages and disadvantages are analyzed based on the simulation results.
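The core idea can be sketched as follows (not the paper's design method; the first-order plant, horizon and weights are invented for illustration): an unconstrained receding-horizon regulator that exploits the reference known N samples ahead:

import numpy as np

a, b, N, lam = 0.9, 0.5, 10, 0.01          # toy first-order plant and tuning

# Prediction over the horizon: X = F*x0 + G*U, from x[k+1] = a*x[k] + b*u[k].
F = np.array([a ** (i + 1) for i in range(N)])
G = np.array([[a ** (i - j) * b if j <= i else 0.0 for j in range(N)]
              for i in range(N)])

def preview_step(x0, r_future):
    """Minimize ||X - r||^2 + lam*||U||^2 over the horizon; apply the first input."""
    U = np.linalg.solve(G.T @ G + lam * np.eye(N), G.T @ (r_future - F * x0))
    return U[0]

# The controller "sees" the reference step N samples before it happens.
r = np.concatenate([np.zeros(20), np.ones(40)])
x = 0.0
for k in range(40):
    x = a * x + b * preview_step(x, r[k:k + N])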

Relevance: 20.00%

Abstract:

The growth of the Internet has increased the need for scalable congestion control mechanisms in high-speed networks. In this context, we propose a rate-based explicit congestion control mechanism in which sources are provided with the rate at which they can transmit. These rates are computed with a distributed max-min fair algorithm, SLBN. The novelty of SLBN is that it combines two interesting features not simultaneously present in existing proposals: scalability and fast convergence to the max-min fair rates, even under high session churn. SLBN is scalable because routers maintain only a constant amount of state information (three integer variables per link) and incur only a constant amount of computation per protocol packet, independently of the number of sessions that cross the router. Additionally, SLBN does not require processing any data packet, and it converges independently of the sessions' RTTs. Finally, by design, the protocol is conservative when assigning rates, even in the presence of high churn, which helps prevent link overshoots in transient periods. We claim that, with all these features, our mechanism is a good candidate for use in real deployments.
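For reference, the max-min fair allocation that SLBN converges to can be computed centrally with the classical progressive-filling procedure (this sketch is not the SLBN protocol itself, which reaches the same rates in a distributed way with constant per-link state):

def max_min_fair(capacity, sessions):
    """capacity: {link: rate}; sessions: {session: set of links it crosses}."""
    rate, remaining, active = {}, dict(capacity), dict(sessions)
    while active:
        # Fair share each link could still offer its active sessions.
        share = {l: c / sum(1 for s in active.values() if l in s)
                 for l, c in remaining.items() if any(l in s for s in active.values())}
        bottleneck = min(share, key=share.get)   # link that saturates first
        fair = share[bottleneck]
        for s, links in list(active.items()):
            if bottleneck in links:
                rate[s] = fair                   # sessions on it are frozen at its share
                for l in links:
                    remaining[l] -= fair         # consume capacity along their paths
                del active[s]
        del remaining[bottleneck]
    return rate

# Hypothetical 2-link network: A crosses both links, B only l1, C only l2.
print(max_min_fair({"l1": 10.0, "l2": 4.0},
                   {"A": {"l1", "l2"}, "B": {"l1"}, "C": {"l2"}}))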

Relevance: 20.00%

Abstract:

Due to the high dependence of photovoltaic energy efficiency on environmental conditions (temperature, irradiation, etc.), it is quite important to analyze the characteristics of photovoltaic devices in order to optimize energy production, even for small-scale users. The use of equivalent circuits is the preferred option for analyzing the performance of solar cells and panels. However, the aforementioned small-scale users rarely have the equipment or expertise to perform large testing/calculation campaigns, the only information available to them being the manufacturer's datasheet. The solution to this problem is the development of new and simple methods to define equivalent circuits able to reproduce the behavior of the panel under any working condition from a very small amount of information. In the present work, a direct and completely explicit method to extract solar cell parameters from the manufacturer's datasheet is presented and tested. The method is based on an analytical formulation that uses the Lambert W-function to make the series resistor equation explicit. It is applied to analyze the performance (i.e., the current-voltage, or I-V, curve) of a commercial solar panel at different levels of irradiation and temperature, based only on the information included in the manufacturer's datasheet.
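The explicit step can be sketched with the well-known Lambert-W solution of the single-diode model (the parameter values below are illustrative, not taken from any particular datasheet):

import numpy as np
from scipy.special import lambertw

def cell_current(V, Ipv, I0, Rs, Rsh, n, Vt):
    """Single-diode model solved explicitly for I via the Lambert W-function."""
    k = n * Vt * (Rs + Rsh)
    theta = Rs * I0 * Rsh / k * np.exp(Rsh * (Rs * Ipv + Rs * I0 + V) / k)
    return (Rsh * (Ipv + I0) - V) / (Rs + Rsh) - n * Vt / Rs * lambertw(theta).real

V = np.linspace(0.0, 0.6, 61)   # voltage sweep for a single cell
I = cell_current(V, Ipv=5.0, I0=1e-9, Rs=0.02, Rsh=50.0, n=1.3, Vt=0.0257)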


Relevance: 20.00%

Abstract:

In the last decades, software systems have become an intrinsic element of our daily lives. Software exists in our computers, in our cars, and even in our refrigerators. Today's world has become heavily dependent on software, and yet we still struggle to deliver quality software products on time and within budget. When searching for the causes of such an alarming scenario, we find concurrent voices pointing to the role of the project manager. But what is project management, and what makes it so challenging? Part of the answer to this question requires a deeper analysis of why software project managers have been largely ineffective. Answering this question might assist current and future software project managers in avoiding, or at least effectively mitigating, problematic scenarios that, if unresolved, will eventually lead to additional failures. This is where anti-patterns come into play and where they can be a useful tool in identifying and addressing software project management failure. Unfortunately, anti-patterns are still a fairly recent concept, and thus available information is still scarce and loosely organized. This thesis attempts to help remedy this scenario. The objective of this work is to help organize existing, documented software project management anti-patterns by answering two research questions: What are the different anti-patterns in software project management? How can these anti-patterns be categorized?

Relevance: 20.00%

Abstract:

Comparison of explicit and implicit temporal integration schemes in the simulation of blood flow and its interaction with the arterial wall. There are two major strategies in FSI coupling techniques: implicit and explicit. The general difference between these methodologies is how many times data are exchanged between the fluid and solid domains at each FSI time-step. In both coupling strategies, the pressure values coming from the fluid domain calculations at each time-step are exported to the solid domain, and the solid domain is then analyzed with these imported forces. In contrast to explicit coupling, in the implicit approach the fluid and solid domains' data are exchanged several times until convergence is achieved. Although this method may improve numerical stabilization, it increases the computational cost due to the extra data exchanges. In cardiovascular simulations, depending on the analysis objectives, one may choose an explicit or implicit approach. In the current work, the advantage of an explicit coupling strategy is highlighted when simulating pulsatile blood flow in elastic arteries.
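The difference between the two strategies can be sketched on a toy scalar interface (placeholder relations, not a hemodynamic model): the explicit step performs one fluid-solid exchange per time-step, while the implicit step repeats the exchange until the interface state converges:

def fluid_solve(wall_pos):
    return 10.0 - 2.0 * wall_pos            # toy wall pressure given wall position

def solid_solve(pressure):
    return 0.1 * pressure                   # toy elastic wall response to pressure

def explicit_step(wall_pos):
    """One fluid-solid exchange per time-step: cheap, loosely coupled."""
    return solid_solve(fluid_solve(wall_pos))

def implicit_step(wall_pos, tol=1e-10, max_iters=100):
    """Repeat the exchange within the time-step until the interface converges."""
    for _ in range(max_iters):
        new_pos = solid_solve(fluid_solve(wall_pos))
        if abs(new_pos - wall_pos) < tol:
            break
        wall_pos = new_pos
    return wall_pos

print(explicit_step(0.0), implicit_step(0.0))   # 1.0 vs the converged 0.8333...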

Relevance: 20.00%

Abstract:

Although context could be exploited to improve performance, elasticity and adaptation in most distributed systems that adopt the publish/subscribe (P/S) communication model, only a few researchers have focused on context-aware matching in P/S systems and explored its implications in domains with highly dynamic context, such as wireless sensor networks (WSNs) and IoT-enabled applications. Most adopted P/S models are context agnostic or do not differentiate context from other application data. In this article, we present a novel context-aware P/S model. SilboPS manages context explicitly, focusing on minimizing network overhead in domains with recurrent context changes related, for example, to mobile ad hoc networks (MANETs). Our approach helps to efficiently share and use sensor data coming from ubiquitous WSNs across a plethora of applications intent on using these data to build context awareness. Specifically, we empirically demonstrate that decoupling a subscription from the changing context in which it is produced, and leveraging contextual scoping in the filtering process, notably reduces the (un)subscription cost per node while improving the global performance/throughput of the network of brokers without altering the cost of SIENA-like topology changes.
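The notion of contextual scoping can be illustrated with a conceptual sketch (a hypothetical API, not SilboPS's actual interface): a subscription carries an ordinary content filter plus a separate context scope, and an event is delivered only when both match:

from dataclasses import dataclass

@dataclass
class Subscription:
    content_filter: dict      # attribute -> required value
    context_scope: dict       # context attribute -> set of admitted values

def matches(sub, content, context):
    """Deliver only when the content filter and the context scope both match."""
    in_scope = all(context.get(k) in v for k, v in sub.context_scope.items())
    return in_scope and all(content.get(k) == v for k, v in sub.content_filter.items())

sub = Subscription(content_filter={"type": "temperature"},
                   context_scope={"zone": {"lab", "office"}})
print(matches(sub, {"type": "temperature", "value": 21.5}, {"zone": "lab"}))     # True
print(matches(sub, {"type": "temperature", "value": 21.5}, {"zone": "garage"}))  # False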

Relevance: 20.00%

Abstract:

Mapping aboveground carbon density in tropical forests can support CO2 emission monitoring and provide benefits for national resource management. Although LiDAR technology has been shown to be useful for assessing carbon density patterns, the accuracy and generality of calibrations of LiDAR-based aboveground carbon density (ACD) predictions against estimates obtained from field inventory techniques should be tested more intensively in order to advance tropical forest carbon mapping. Here we present results from the application of a general ACD estimation model applied with small-footprint LiDAR data and field-based estimates of a 50-ha forest plot in Ecuador's Yasuní National Park. Subplots used for calibration and validation of the general LiDAR equation were selected based on analysis of topographic position and the spatial distribution of aboveground carbon stocks. The results showed that stratification of plot locations based on topography can improve the calibration and application of ACD estimation using airborne LiDAR (R² = 0.94, RMSE = 5.81 Mg C ha⁻¹, BIAS = 0.59). These results strongly suggest that a general LiDAR-based approach can be used for mapping aboveground carbon stocks in western lowland Amazonian forests.
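The validation metrics reported above can be computed for any calibration with a few lines (the arrays below are illustrative placeholders, not the Yasuní plot data):

import numpy as np

field_acd = np.array([110.0, 95.0, 130.0, 88.0, 102.0])   # field estimates, Mg C/ha
lidar_acd = np.array([108.0, 99.0, 126.0, 90.0, 104.0])   # LiDAR predictions, Mg C/ha

residual = lidar_acd - field_acd
rmse = np.sqrt(np.mean(residual ** 2))                     # root-mean-square error
bias = np.mean(residual)                                   # mean signed error
r2 = 1.0 - np.sum(residual ** 2) / np.sum((field_acd - field_acd.mean()) ** 2)
print(f"R2 = {r2:.2f}, RMSE = {rmse:.2f} Mg C/ha, BIAS = {bias:.2f}")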

Relevance: 20.00%

Abstract:

Human functional neuroimaging techniques provide a powerful means of linking neural level descriptions of brain function and cognition. The exploration of the functional anatomy underlying human memory comprises a prime example. Three highly reliable findings linking memory-related cognitive processes to brain activity are discussed. First, priming is accompanied by reductions in the amount of neural activation relative to naive or unprimed task performance. These reductions can be shown to be both anatomically and functionally specific and are found for both perceptual and conceptual task components. Second, verbal encoding, allowing subsequent conscious retrieval, is associated with activation of higher order brain regions including areas within the left inferior and dorsal prefrontal cortex. These areas also are activated by working memory and effortful word generation tasks, suggesting that these tasks, often discussed as separable, might rely on interdependent processes. Finally, explicit (intentional) retrieval shares much of the same functional anatomy as the encoding and word generation tasks but is associated with the recruitment of additional brain areas, including the anterior prefrontal cortex (right > left). These findings illustrate how neuroimaging techniques can be used to study memory processes and can both complement and extend data derived through other means. More recently developed methods, such as event-related functional MRI, will continue this progress and may provide additional new directions for research.

Relevance: 20.00%

Abstract:

The development of the ecosystem approach and of models for the management of ocean marine resources requires easy access to standard, validated datasets of historical catch data for the main exploited species. These data are used to measure the impact of biomass removal by fisheries and to evaluate the models' skill, while the use of a standard dataset facilitates model inter-comparison. North Atlantic albacore tuna is exploited all year round by longline fisheries and, in summer and autumn, by surface fisheries, and the fishery statistics are compiled by the International Commission for the Conservation of Atlantic Tunas (ICCAT). Catch and effort with geographical coordinates at a monthly spatial resolution of 1° or 5° squares were extracted for this species with a careful definition of fisheries and data screening. In total, thirteen fisheries were defined for the period 1956-2010, with the fishing gears longline, troll, mid-water trawl and bait fishing. However, the spatialized catch-effort data available in the ICCAT database represent only a fraction of the total catch. Length frequencies of the catch were also extracted according to the definition of fisheries above for the period 1956-2010, with a quarterly temporal resolution and spatial resolutions varying from 1° × 1° to 10° × 20°. The resolution used to measure the fish also varies, with size bins of 1, 2 or 5 cm (fork length). The screening of the data detected inconsistencies: a relatively large number of samples were larger than 150 cm, while all studies on the growth of albacore suggest that the fish rarely grow beyond 130 cm. Therefore, a threshold value of 130 cm was arbitrarily fixed, and all length-frequency data above this value were removed from the original dataset.
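The screening step translates directly into a simple filter (column names are hypothetical, not the ICCAT file layout):

import pandas as pd

lf = pd.DataFrame({"fishery": ["LL1", "LL1", "TR2"],       # hypothetical records
                   "fork_length_cm": [95, 152, 118],
                   "count": [40, 3, 25]})

MAX_FL_CM = 130    # albacore rarely grow beyond 130 cm fork length
screened = lf[lf["fork_length_cm"] <= MAX_FL_CM].reset_index(drop=True)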
