905 results for "Vehicle counting and classification"


Relevance:

100.00%

Publisher:

Abstract:

Two studies tested the hypothesis that preschool children's theory of mind ability is related to their levels of peer acceptance. In Study 1, 78 children between the ages of 4 and 6 provided peer nominations that allowed determination of social preference and social impact scores, and classification into one of five peer status groups (following Coie & Dodge, 1983). Children were also tested on five different theory of mind tasks. The results showed that theory of mind scores were significantly related to social preference scores in a subsample of children who were over 5 years old. Further, popular children were found to score higher on theory of mind tasks than children classified as rejected. Study 2 replicated and extended the first study with a new sample of 87 4- to 6-year-old children. Study 2 included measures of peer acceptance, theory of mind ability and verbal intelligence, as well as teacher ratings of prosocial and aggressive behaviours. The results of Study 2 showed that for the total group of children, prosocial behaviour was the best predictor of social preference scores. When the Study 2 sample was split into older and younger children, theory of mind ability was found to be the best predictor of social preference scores for the older children (over age 5), while aggressive and prosocial behaviours were the best predictors of peer acceptance in the younger children. Overall, the pattern of results suggests that the impact of theory of mind ability on peer acceptance is modest but increases with children's age.


By identifying energy waste streams in vehicles' fuel consumption and introducing the concept of lean driving systems, a technological gap for reducing fuel consumption was identified. This paper proposes a solution to close this gap through a modular vehicle architecture aligned with driving patterns. It does not address detailed technological solutions; instead, it models the potential effects on fuel consumption of a modular vehicle concept and quantifies their dependence on vehicle design parameters (manifesting as the vehicle mass) and user behavior parameters (driving patterns, manifesting as the use of a modular car in lighter and heavier modes, in urban and highway cycles). Modularity has been applied functionally in the automotive industry as a manufacturing and assembly management strategy; here it is conceived as a product development strategy for flexibility in use, driven by environmental concerns and enabled by social behaviors. The authors argue that this concept is a step forward in combining technological solutions and social behavior, of which eco-driving is a vivid example, and that it can potentially evolve toward a lean, more sustainable driving culture.
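As a rough illustration of the kind of dependence the paper models, the sketch below assumes a toy first-order relation between fuel use and vehicle mass, with cycle-dependent coefficients. All numbers are illustrative assumptions, not values from the study.

```python
def fuel_per_100km(mass_kg, cycle):
    """Toy model: fuel use (L/100 km) grows linearly with vehicle
    mass, with a cycle-dependent base and slope. The coefficients
    below are illustrative assumptions, not measured values."""
    base, per_kg = {"urban": (3.0, 0.004), "highway": (2.5, 0.002)}[cycle]
    return base + per_kg * mass_kg

# a modular car: full configuration vs. a 300 kg lighter commuting mode
full, light = 1300.0, 1000.0
saving = fuel_per_100km(full, "urban") - fuel_per_100km(light, "urban")
print(round(saving, 2))  # → 1.2 L/100 km saved in the urban cycle
```

Under such a model, the benefit of driving in the lighter mode is largest in the urban cycle, where the mass-dependent slope is steepest.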


An adequate supply of nutrients is essential for obtaining high coffee yields. The objective of this study was to evaluate the effect of N, K and the N:K ratio on the vegetative and reproductive growth of coffee. For this purpose, coffee plants were grown in nutrient solution containing K at concentrations of 1.08, 2.15, 3.23 and 5.38 mmol L-1 combined with a dose of 6 mmol L-1 N, resulting in the N:K ratios (w/w) of 1:0.5, 1:1, 1:1.5 and 1:2.5. The control treatment consisted of doses of 3 and 1.61 mmol L-1 of N and K respectively, resulting in an N:K ratio (w/w) of 1.0:1.5. The following variables were evaluated every three weeks from the beginning of the experiment: height, stem diameter, number of nodes of the eighth plagiotropic branch (index branch), pairs of plagiotropic branches and number of nodes on the orthotropic branch. Additionally, the chemical composition of processed beans and of leaves between flowering and the rapid expansion stage of the cherry beans, the production of cherry beans per plant, and the classification of beans according to size were evaluated. N influenced mainly the characteristics of vegetative growth, and K influenced mainly the reproductive growth evaluated by production. The lowest production resulted in the highest percentages of beans retained on sieves with holes larger than 16/64", while the highest production promoted an increase in the percentage of beans retained on sieves with holes smaller than 16/64".


Computer-Aided Diagnosis (CAD) systems support the detection and differentiation of benign and malignant lesions, improving performance in breast cancer diagnosis. Breast lesion malignancy is strongly correlated with contour shape: benign lesions present regular contours, while malignant lesions tend to present irregular contours. Quantitative measures such as the fractal dimension (FD) can therefore help characterize the regular or irregular contour of a lesion. The main objective of this study is to verify whether the combined use of two (or more) FD measures improves lesion characterization according to the BIRADS (Breast Imaging Reporting and Data System) scale and the lesion type. One measure is the traditionally used "contour FD"; another, proposed by us, is the "area FD"; three further measures are obtained from these through dilation/erosion operations and by normalizing one of the previous measures. The FD measures (contour FD and area FD) were computed by applying the box-counting method, both directly to segmented lesion images and after applying a dilation/erosion algorithm. The last measure is based on the normalized difference between the two area FD measures before and after applying the dilation/erosion algorithm. The results show that the contour FD is a useful tool for differentiating lesions according to the BIRADS scale and the lesion type; however, in some situations errors occur. The combined use of this measure with the four proposed measures can improve lesion classification.
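The box-counting method mentioned above estimates a fractal dimension from how the number of occupied grid boxes scales with box size. A minimal sketch, assuming the lesion contour is given as a list of pixel coordinates (the study's actual CAD pipeline is not reproduced here):

```python
import math

def box_counting_dimension(points, sizes=(1, 2, 4, 8, 16, 32)):
    """Estimate the fractal (box-counting) dimension of a set of 2-D
    contour points: count the grid boxes of side s containing at least
    one point, then fit the slope of log N(s) against log(1/s)."""
    xs, ys = [], []
    for s in sizes:
        boxes = {(int(x // s), int(y // s)) for x, y in points}
        xs.append(math.log(1.0 / s))
        ys.append(math.log(len(boxes)))
    # ordinary least-squares slope = estimated dimension
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    return (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
            / sum((x - mx) ** 2 for x in xs))

# a straight-line contour should have dimension close to 1
line = [(i, i) for i in range(256)]
print(round(box_counting_dimension(line), 2))  # → 1.0
```

An irregular (malignant-like) contour fills the plane more densely at fine scales, so its estimated dimension rises above 1, which is what makes the FD a useful regularity measure.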


Alzheimer Disease (AD) is characterized by progressive cognitive decline and dementia. Earlier diagnosis and classification of the different stages of the disease are currently the main challenges and can be assessed by neuroimaging. With this work we aim to evaluate the quality of brain regions and neuroimaging metrics as biomarkers of AD. Multimodal Imaging Brain Connectivity Analysis (MIBCA) toolbox functionalities were used to study AD with T1-weighted imaging, Diffusion Tensor Imaging and 18F-AV45 PET, with data obtained from the AD Neuroimaging Initiative database, specifically 12 healthy controls (CTRL) and 33 patients with early mild cognitive impairment (EMCI), late MCI (LMCI) and AD (11 patients per group). The metrics evaluated were gray-matter volume (GMV), cortical thickness (CThk), mean diffusivity (MD), fractional anisotropy (FA), fiber count (FiberConn), node degree (Deg), clustering coefficient (ClusC) and relative standardized uptake values (rSUV). Receiver Operating Characteristic (ROC) curves were used to evaluate and compare the diagnostic accuracy of the most significant metrics and brain regions, expressed as the area under the curve (AUC). Comparisons were performed between groups. RH-Accumbens/Deg demonstrated the highest AUC when differentiating CTRL from EMCI (82%), whereas rSUV presented the highest AUC in several brain regions when distinguishing CTRL from LMCI (99%). Regarding CTRL-AD, the highest AUCs were found with LH-STG/FiberConn and RH-FP/FiberConn (~100%). A larger number of neuroimaging metrics related to cortical atrophy with AUC > 70% was found in CTRL-AD in both hemispheres, while in earlier stages cortical metrics appeared in more confined areas of the temporal region, mainly in the LH, indicating the increasing spread of cortical atrophy that is characteristic of disease progression. In CTRL-EMCI, several brain regions and neuroimaging metrics presented AUC > 70%, with worse results in later stages, suggesting these indicators as biomarkers for an earlier stage of MCI, although further research is necessary.
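The AUC figure of merit used throughout can be computed without fitting a curve, via the rank-sum identity AUC = P(score in one group exceeds score in the other). A minimal sketch with made-up node-degree values (not ADNI data):

```python
def roc_auc(scores_neg, scores_pos):
    """Area under the ROC curve via the Mann-Whitney identity:
    AUC = P(score_pos > score_neg), counting ties as 1/2."""
    wins = 0.0
    for p in scores_pos:
        for n in scores_neg:
            if p > n:
                wins += 1.0
            elif p == n:
                wins += 0.5
    return wins / (len(scores_pos) * len(scores_neg))

# hypothetical node-degree values for controls vs. an impaired group
ctrl = [8, 9, 10, 11, 12]
emci = [11, 12, 13, 14, 15]
print(roc_auc(ctrl, emci))  # → 0.92
```

An AUC of 0.5 means the metric cannot separate the groups at all; values above roughly 0.7, as reported for the best metrics here, indicate useful discrimination.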


Demand response can play a very relevant role in power systems with intensive use of distributed energy resources, of which renewable intermittent sources are a significant part. More active consumer participation can help improve system reliability and decrease or defer the required investments. Adequate use and management of demand response is even more important in competitive electricity markets. However, experience shows that it is difficult to use demand response adequately in this context, highlighting the need for research in this area. The most important difficulties seem to be caused by inadequate business models and by inadequate management of demand response programs. This paper contributes to developing methodologies and a computational infrastructure able to provide the involved players with adequate decision support on the design and use of demand response programs and contracts. The presented work uses DemSi, a demand response simulator developed by the authors to simulate demand response actions and programs, which includes realistic power system simulation. It includes an optimization module for the application of demand response programs and contracts using deterministic and metaheuristic approaches. The proposed methodology is an important improvement to the simulator, providing adequate tools for the adoption of demand response programs by the involved players. A machine learning method based on clustering and classification techniques, resulting in a rule base concerning the use of DR programs and contracts, is also used. A case study concerning the use of demand response in an incident situation is presented.
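Purely as an illustration of the clustering step that could precede rule derivation, here is a minimal 1-D k-means over hypothetical consumer load averages; DemSi's actual machine learning method is not reproduced here, and all values are made up.

```python
def kmeans_1d(values, k=2, iters=20):
    """Minimal 1-D k-means, e.g. to group consumers by average load
    before deriving per-cluster demand-response rules (illustrative
    sketch only)."""
    # spread the initial centers across the sorted values
    centers = sorted(values)[:: max(1, len(values) // k)][:k]
    for _ in range(iters):
        clusters = [[] for _ in centers]
        for v in values:
            i = min(range(len(centers)), key=lambda j: abs(v - centers[j]))
            clusters[i].append(v)
        # move each center to the mean of its cluster
        centers = [sum(c) / len(c) if c else centers[i]
                   for i, c in enumerate(clusters)]
    return sorted(centers)

loads = [1.0, 1.2, 0.9, 5.0, 5.5, 4.8]  # kW, made-up consumer averages
print(kmeans_1d(loads))
```

Each resulting cluster center could then anchor a rule such as which DR program to offer consumers of that load class.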


Work presented within the scope of the Master's in Informatics Engineering (Mestrado em Engenharia Informática), as a partial requirement for obtaining the degree of Master in Informatics Engineering.


The development of high spatial resolution airborne and spaceborne sensors has improved the capability of ground-based data collection in the fields of agriculture, geography, geology, mineral identification, detection [2, 3], and classification [4–8]. The signal read by the sensor from a given spatial resolution element, at a given spectral band, is a mixture of components originating from the constituent substances, termed endmembers, located at that resolution element. This chapter addresses hyperspectral unmixing, which is the decomposition of the pixel spectra into a collection of constituent spectra, or spectral signatures, and their corresponding fractional abundances indicating the proportion of each endmember present in the pixel [9, 10]. Depending on the mixing scales at each pixel, the observed mixture is either linear or nonlinear [11, 12]. The linear mixing model holds when the mixing scale is macroscopic [13]. The nonlinear model holds when the mixing scale is microscopic (i.e., intimate mixtures) [14, 15]. The linear model assumes negligible interaction among distinct endmembers [16, 17]. The nonlinear model assumes that incident solar radiation is scattered by the scene through multiple bounces involving several endmembers [18]. Under the linear mixing model, and assuming that the number of endmembers and their spectral signatures are known, hyperspectral unmixing is a linear problem, which can be addressed, for example, with the maximum likelihood setup [19], the constrained least-squares approach [20], spectral signature matching [21], the spectral angle mapper [22], and subspace projection methods [20, 23, 24]. Orthogonal subspace projection [23] reduces the data dimensionality, suppresses undesired spectral signatures, and detects the presence of a spectral signature of interest. The basic concept is to project each pixel onto a subspace that is orthogonal to the undesired signatures.
As shown in Settle [19], the orthogonal subspace projection technique is equivalent to the maximum likelihood estimator. This projection technique was extended by three unconstrained least-squares approaches [24] (signature space orthogonal projection, oblique subspace projection, target signature space orthogonal projection). Other works using the maximum a posteriori probability (MAP) framework [25] and projection pursuit [26, 27] have also been applied to hyperspectral data. In most cases the number of endmembers and their signatures are not known. Independent component analysis (ICA) is an unsupervised source separation process that has been applied with success to blind source separation, feature extraction, and unsupervised recognition [28, 29]. ICA consists of finding a linear decomposition of the observed data yielding statistically independent components. Given that hyperspectral data are, in given circumstances, linear mixtures, ICA comes to mind as a possible tool to unmix this class of data. In fact, the application of ICA to hyperspectral data has been proposed in reference 30, where endmember signatures are treated as sources and the mixing matrix is composed of the abundance fractions, and in references 9, 25, and 31–38, where sources are the abundance fractions of each endmember. In the first approach, we face two problems: (1) the number of samples is limited to the number of channels, and (2) the process of pixel selection, playing the role of mixed sources, is not straightforward. In the second approach, ICA is based on the assumption of mutually independent sources, which is not the case for hyperspectral data, since the sum of the abundance fractions is constant, implying dependence among abundances. This dependence compromises ICA applicability to hyperspectral images. In addition, hyperspectral data are immersed in noise, which degrades ICA performance.
IFA [39] was introduced as a method for recovering independent hidden sources from their observed noisy mixtures. IFA implements two steps. First, source densities and noise covariance are estimated from the observed data by maximum likelihood. Second, sources are reconstructed by an optimal nonlinear estimator. Although IFA is a well-suited technique to unmix independent sources under noisy observations, the dependence among abundance fractions in hyperspectral imagery compromises, as in the ICA case, the IFA performance. Considering the linear mixing model, hyperspectral observations lie in a simplex whose vertices correspond to the endmembers. Several approaches [40–43] have exploited this geometric feature of hyperspectral mixtures [42]. The minimum volume transform (MVT) algorithm [43] determines the simplex of minimum volume containing the data. MVT-type approaches are computationally complex. Usually, these algorithms first find the convex hull defined by the observed data and then fit a minimum volume simplex to it. Aiming at a lower computational complexity, some algorithms such as vertex component analysis (VCA) [44], the pixel purity index (PPI) [42], and N-FINDR [45] still find the minimum volume simplex containing the data cloud, but they assume the presence in the data of at least one pure pixel of each endmember. This is a strong requisite that may not hold in some data sets. In any case, these algorithms find the set of most pure pixels in the data. Hyperspectral sensors collect spatial images over many narrow contiguous bands, yielding large amounts of data. For this reason, very often, the processing of hyperspectral data, including unmixing, is preceded by a dimensionality reduction step to reduce computational complexity and to improve the signal-to-noise ratio (SNR).
Principal component analysis (PCA) [46], maximum noise fraction (MNF) [47], and singular value decomposition (SVD) [48] are three well-known projection techniques widely used in remote sensing in general and in unmixing in particular. A newly introduced method [49] exploits the structure of hyperspectral mixtures, namely the fact that spectral vectors are nonnegative. The computational complexity associated with these techniques is an obstacle to real-time implementations. To overcome this problem, band selection [50] and non-statistical [51] algorithms have been introduced. This chapter addresses hyperspectral data source dependence and its impact on ICA and IFA performance. The study considers simulated and real data and is based on mutual information minimization. Hyperspectral observations are described by a generative model. This model takes into account the degradation mechanisms normally found in hyperspectral applications, namely signature variability [52–54], abundance constraints, topography modulation, and system noise. The computation of mutual information is based on fitting mixtures of Gaussians (MOG) to the data. The MOG parameters (number of components, means, covariances, and weights) are inferred using a minimum description length (MDL) based algorithm [55]. We study the behavior of the mutual information as a function of the unmixing matrix. The conclusion is that the unmixing matrix minimizing the mutual information might be very far from the true one. Nevertheless, some abundance fractions might be well separated, mainly in the presence of strong signature variability, a large number of endmembers, and high SNR. We end this chapter by sketching a new methodology to blindly unmix hyperspectral data, where abundance fractions are modeled as a mixture of Dirichlet sources. This model enforces the positivity and constant-sum (full additivity) constraints. The mixing matrix is inferred by an expectation-maximization (EM)-type algorithm.
This approach is in the vein of references 39 and 56, replacing the independent sources represented by MOG with a mixture of Dirichlet sources. Compared with the geometric-based approaches, the advantage of this model is that there is no need for pure pixels in the observations. The chapter is organized as follows. Section 6.2 presents a spectral radiance model and formulates spectral unmixing as a linear problem accounting for abundance constraints, signature variability, topography modulation, and system noise. Section 6.3 presents a brief summary of ICA and IFA algorithms. Section 6.4 illustrates the performance of IFA and of some well-known ICA algorithms with experimental data. Section 6.5 studies the ICA and IFA limitations in unmixing hyperspectral data. Section 6.6 presents results of ICA based on real data. Section 6.7 describes the new blind unmixing scheme and some illustrative examples. Section 6.8 concludes with some remarks.
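Under the linear mixing model with known endmember signatures, unmixing reduces to least squares, as discussed above. A minimal sketch for two endmembers, solving the 2x2 normal equations in closed form; the signatures and abundances are made up, and no nonnegativity or sum-to-one constraint is enforced:

```python
def unmix_two_endmembers(pixel, e1, e2):
    """Unconstrained least-squares abundance estimate for a linear
    mixture x = a1*e1 + a2*e2 + noise, solving the normal equations
    (M^T M) a = M^T x by Cramer's rule for the 2x2 case."""
    dot = lambda u, v: sum(ui * vi for ui, vi in zip(u, v))
    g11, g12, g22 = dot(e1, e1), dot(e1, e2), dot(e2, e2)
    b1, b2 = dot(e1, pixel), dot(e2, pixel)
    det = g11 * g22 - g12 * g12
    a1 = (g22 * b1 - g12 * b2) / det
    a2 = (g11 * b2 - g12 * b1) / det
    return a1, a2

# two made-up endmember signatures over 4 bands and a noiseless 30/70 mix
soil = [0.9, 0.7, 0.5, 0.3]
veg  = [0.1, 0.2, 0.8, 0.6]
mix  = [0.3 * s + 0.7 * v for s, v in zip(soil, veg)]
print(unmix_two_endmembers(mix, soil, veg))  # → ~(0.3, 0.7)
```

With noiseless data the true abundances are recovered exactly; the constrained and blind methods discussed in the chapter matter precisely when noise, unknown signatures, and the abundance constraints come into play.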


The lysotypes, plasmid profiles, and antimicrobial resistance profiles were determined in 111 Salmonella Typhimurium strains isolated from feces and blood of children treated in Rio de Janeiro and in Salvador. Six distinct lysotypes (19, 41, 97, 105, 120 and 193) were recognized, with a predominance of lysotype 193 (59.7%) in Rio de Janeiro and of phage type 105 (38.4%) in Salvador. Approximately 86.7% of the lysotype 193 strains presented multiple resistance to more than six antimicrobial agents, whereas 93% of lysotype 105 strains were fully susceptible. More than 90% of the strains presented plasmids, distributed into 36 different profiles in Rio de Janeiro and into 10 profiles in Salvador. A 40 MDa plasmid was the most frequent (47%) in the strains from Rio de Janeiro, whereas a 61 MDa plasmid predominated (14.5%) in Salvador. Combined analysis of plasmid profile and classification into lysotypes (especially types 105 and 103) proved to be more discriminatory than the other methods applied.


Underwater acoustic networks can be quite effective in establishing communication links between autonomous underwater vehicles (AUVs) and other vehicles or control units, enabling complex vehicle applications and control scenarios. A communications and control framework to support the use of underwater acoustic networks, together with sample application scenarios, is described for single and multi-AUV operation.


The sensitivity of the larval stages of Schistosoma mansoni to chemotherapy with praziquantel and oxamniquine was tested in mice during primary and secondary infections and after different intervals from cercarial exposure. Worm recovery by perfusion of the porto-mesenteric system, followed by counting and a morphometric study of the parasite, allowed the conclusion that the relative resistance of the larval stages of S. mansoni to schistosomicide drugs, demonstrated in primary infections, also persists when the host is already infected. This indicates that a therapeutic failure may result when an infected host is treated some time after being re-infected, because of the presence of migrating, drug-resistant, immature forms of the parasite.


The extraction of relevant terms from texts is an extensively researched task in Text Mining. Relevant terms have been applied in areas such as Information Retrieval or document clustering and classification. However, relevance has a rather fuzzy nature, since the classification of some terms as relevant or not relevant is not consensual. For instance, while words such as "president" and "republic" are generally considered relevant by human evaluators, and words like "the" and "or" are not, terms such as "read" and "finish" gather no consensus about their semantics and informativeness. Concepts, on the other hand, have a less fuzzy nature. Therefore, instead of deciding on the relevance of a term during the extraction phase, as most extractors do, I propose to first extract from texts what I have called generic concepts (all concepts) and postpone the decision about relevance to downstream applications, according to their needs. For instance, a keyword extractor may assume that the most relevant keywords are the most frequent concepts in the documents. Moreover, most statistical extractors are incapable of extracting single-word and multi-word expressions using the same methodology. These factors led to the development of the ConceptExtractor, a statistical and language-independent methodology which is explained in Part I of this thesis. In Part II, I show that the automatic extraction of concepts has great applicability. For instance, for the extraction of keywords from documents, using the Tf-Idf metric only on concepts yields better results than using Tf-Idf without concepts, especially for multi-word expressions. In addition, since concepts can be semantically related to other concepts, this allows us to build implicit document descriptors. These applications led to published work. Finally, I present some work that, although not yet published, is briefly discussed in this document.
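The Tf-Idf weighting mentioned above can be sketched in a few lines. Documents are represented here as plain lists of already-extracted concept strings, which is an assumption of this illustration rather than the ConceptExtractor's actual interface:

```python
import math

def tf_idf(term, doc, corpus):
    """Classic Tf-Idf score of a term (here, any extracted concept,
    single- or multi-word) inside one document of a corpus.
    Each document is a list of concept strings."""
    tf = doc.count(term) / len(doc)                 # term frequency
    df = sum(1 for d in corpus if term in d)        # document frequency
    idf = math.log(len(corpus) / df)                # inverse doc frequency
    return tf * idf

docs = [
    ["president", "republic", "president", "election"],
    ["football", "match", "president"],
    ["republic", "constitution", "law", "republic"],
]
# "president" is frequent in doc 0 but appears in 2 of 3 documents
print(round(tf_idf("president", docs[0], docs), 3))  # → 0.203
```

Because multi-word concepts are just strings in this representation, the same scoring applies to them unchanged, which is the point made about single- and multi-word expressions sharing one methodology.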


This paper presents a mobile information system denominated Vehicle-to-Anything Application (V2Anything App) and explains its conceptual aspects. This application is aimed at giving relevant information to Full Electric Vehicle (FEV) drivers by supporting the integration of several sources of data in a mobile application, thus contributing to the deployment of the electric mobility process. The V2Anything App provides recommendations to drivers about the FEV range autonomy, the location of battery charging stations, and information on the electricity market, and also includes a route planner taking into account public transportation and car- or bike-sharing systems. The main contributions of this application are the creation of an Information and Communication Technology (ICT) platform, recommender systems, data integration systems, a driver profile, and personalized range prediction. Thus, it is possible to deliver relevant information to FEV drivers related to the electric mobility process, the electricity market, public transportation, and the FEV performance.
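A personalized range prediction of the kind the app provides can be sketched, at its simplest, as usable energy divided by the driver's own average consumption. The function and figures below are illustrative assumptions, not the V2Anything App's actual model:

```python
def estimate_range_km(soc, battery_kwh, kwh_per_100km):
    """Personalized remaining-range estimate: energy available at the
    current state of charge divided by the driver's own historical
    consumption (all figures here are illustrative assumptions)."""
    usable_kwh = soc * battery_kwh
    return usable_kwh / kwh_per_100km * 100.0

# 80% charge, 24 kWh pack, driver averaging 15 kWh/100 km
print(round(estimate_range_km(0.80, 24.0, 15.0), 1))  # → 128.0
```

The "personalized" part lies in the consumption figure: learning it from the driver's profile, rather than using a fixed factory value, is what lets two drivers of the same FEV receive different range recommendations.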


This paper proposes a smart battery charging strategy for Electric Vehicles (EVs) targeting future smart homes. The proposed strategy consists in regulating the EV battery charging current as a function of the total home current, aiming to prevent overcurrent trips of the main circuit breaker. Computational and experimental results were obtained under real-time conditions to validate the proposed strategy. For this purpose, a bidirectional EV battery charger prototype was adapted to operate in accordance with the aforementioned strategy. The proposed strategy was validated through experimental results obtained in both steady and transient states. The results show the correct operation of the EV battery charger even under heavy load variations.
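The regulation rule described above can be sketched as a simple saturation of the current left under the breaker limit. The breaker limit, charger maximum, and safety margin below are illustrative assumptions, not values from the paper:

```python
def ev_charging_current(home_current, breaker_limit=32.0,
                        charger_max=16.0, margin=1.0):
    """Regulate the EV charging current as a function of the total
    home current so that home load + EV stays below the main breaker
    limit (values in amperes; all limits are illustrative)."""
    available = breaker_limit - margin - home_current
    return max(0.0, min(charger_max, available))

print(ev_charging_current(10.0))  # light home load → full 16.0 A
print(ev_charging_current(28.0))  # heavy home load → throttled to 3.0 A
```

Re-evaluating this rule continuously as appliances switch on and off is what allows the charger to ride through heavy load variations without tripping the breaker.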


This paper presents the conversion process of a traditional Internal Combustion Engine vehicle into an Electric Vehicle. The main constitutive elements of the Electric Vehicle are presented. The developed powertrain uses a three-phase inverter with Field Oriented Control and space vector modulation. The developed on-board battery charging system can operate in Grid-to-Vehicle and Vehicle-to-Grid modes. The implemented prototypes were tested, and experimental results are presented. The assembly of these prototypes in the vehicle was carried out in accordance with the Portuguese legislation on vehicle conversion, and the main adopted solutions are presented.