977 results for cost estimation


Relevance:

20.00%

Publisher:

Abstract:

Nowadays there is an increasing number of location-aware mobile applications. However, these applications retrieve location only through a mobile device's GPS chip, which means that indoors, or in denser environments, they do not work properly. To provide location information everywhere, a pedestrian Inertial Navigation System (INS) is typically used, but such systems can have a large estimation error since, in order to make the system wearable, they use low-cost and low-power sensors. In this work a pedestrian INS is proposed in which force sensors are combined with the accelerometer data to better detect the stance phase of the human gait cycle, which leads to improvements in location estimation. Besides sensor fusion, an information fusion architecture is proposed, based on the information from GPS and several inertial units placed on the pedestrian's body, which is used to learn the pedestrian's gait behaviour and correct, in real time, the inertial sensor errors, thus improving location estimation.
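
As an illustration of the stance-phase detection idea, the sketch below flags stance samples when the foot is loaded and the accelerometer is quiet; the window length and thresholds are hypothetical, not values from the paper.

    import numpy as np

    def detect_stance(accel, force, fs=100, win=0.1,
                      accel_var_thr=0.5, force_thr=20.0):
        # accel: (N, 3) accelerometer samples in m/s^2
        # force: (N,) foot force-sensor readings in N
        # thresholds and window length are illustrative only
        n = max(1, int(win * fs))                    # samples per window
        mag = np.linalg.norm(accel, axis=1)          # acceleration magnitude
        var = np.array([mag[max(0, i - n):i + 1].var()
                        for i in range(len(mag))])   # moving variance
        # stance when the foot is loaded AND the accelerometer is quiet;
        # zero-velocity updates (ZUPTs) can then be applied on these samples
        return (force > force_thr) & (var < accel_var_thr)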

Relevance:

20.00%

Publisher:

Abstract:

This paper addresses the estimation of surfaces from a set of 3D points using the unified framework described in [1]. This framework proposes the use of competitive learning for curve estimation, i.e., a set of points is defined on a deformable curve and they all compete to represent the available data. This paper extends the use of the unified framework to surface estimation. It is shown that competitive learning performs better than snakes, improving the model performance in the presence of concavities and allowing close surfaces to be discriminated. The proposed model is evaluated using synthetic data and medical images (MRI and ultrasound).
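
A minimal winner-take-all sketch of the competitive-learning idea is given below; it is a generic update rule, not the exact one of the unified framework, and all parameter values are illustrative.

    import numpy as np

    def competitive_fit(data, n_units=50, epochs=100, lr=0.1, seed=0):
        # data: (N, 3) array of 3D points; the model points (units)
        # compete to represent them, the winner moving toward each sample
        rng = np.random.default_rng(seed)
        data = np.asarray(data, dtype=float)
        units = data[rng.choice(len(data), n_units, replace=False)].copy()
        for _ in range(epochs):
            for x in data[rng.permutation(len(data))]:
                winner = np.argmin(np.linalg.norm(units - x, axis=1))
                units[winner] += lr * (x - units[winner])  # move the winner
            lr *= 0.95                                     # anneal learning rate
        return units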

Relevance:

20.00%

Publisher:

Abstract:

Dimensionality reduction plays a crucial role in many hyperspectral data processing and analysis algorithms. This paper proposes a new mean-squared-error-based approach to determine the signal subspace in hyperspectral imagery. The method first estimates the signal and noise correlation matrices and then selects the subset of eigenvalues that best represents the signal subspace in the least-squares sense. The effectiveness of the proposed method is illustrated using simulated and real hyperspectral images.

Relevance:

20.00%

Publisher:

Abstract:

As is widely known, in structural dynamic applications, ranging from structural coupling to model updating, the incompatibility between measured and simulated data is inevitable, due to the problem of coordinate incompleteness. Usually, the experimental data from conventional vibration testing are collected at a few translational degrees of freedom (DOF), due to forces applied with hammer or shaker exciters, over a limited frequency range. Hence, one can only measure a portion of the receptance matrix: a few columns, related to the forced DOFs, and a few rows, related to the measured DOFs. In contrast, by finite element modelling one can obtain a full data set, both in terms of DOFs and identified modes. Over the years, several model reduction techniques have been proposed, as well as data expansion ones. However, the latter are significantly fewer and the demand for efficient techniques is still an issue. In this work, a technique is proposed for expanding measured frequency response functions (FRFs) over the entire set of DOFs. The technique is based upon a modified Kidder's method and the principle of reciprocity, and it avoids the need for modal identification, as it uses the measured FRFs directly. In order to illustrate the performance of the proposed technique, a set of simulated experimental translational FRFs is taken as reference to estimate rotational FRFs, including those that are due to applied moments.
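
A sketch of the classical dynamic (Kidder-type) expansion step is shown below, assuming the finite element mass and stiffness matrices are available and that no forces act on the unmeasured DOFs; the modified, reciprocity-based method of the paper differs in its details.

    import numpy as np

    def expand_response(K, M, x_m, meas, omega):
        # K, M  : full FE stiffness and mass matrices (n x n)
        # x_m   : complex responses measured at the DOFs listed in `meas`
        # omega : excitation frequency in rad/s
        n = K.shape[0]
        unmeas = np.setdiff1d(np.arange(n), meas)
        Z = K - omega**2 * M                      # dynamic stiffness matrix
        Zuu = Z[np.ix_(unmeas, unmeas)]
        Zum = Z[np.ix_(unmeas, meas)]
        x_u = -np.linalg.solve(Zuu, Zum @ x_m)    # unmeasured DOFs (no force there)
        x = np.zeros(n, dtype=complex)
        x[meas], x[unmeas] = x_m, x_u
        return x                                  # full expanded response vector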

Relevance:

20.00%

Publisher:

Abstract:

Given a hyperspectral image, determining the number of endmembers and the subspace where they live, without any prior knowledge, is crucial to the success of hyperspectral image analysis. This paper introduces a new minimum-mean-squared-error-based approach to infer the signal subspace in hyperspectral imagery. The method, termed hyperspectral signal identification by minimum error (HySime), is eigendecomposition-based and does not depend on any tuning parameters. It first estimates the signal and noise correlation matrices and then selects the subset of eigenvalues that best represents the signal subspace in the least-squares sense. The effectiveness of the proposed method is illustrated using simulated data based on U.S.G.S. laboratory spectra and real hyperspectral data collected by the AVIRIS sensor over Cuprite, Nevada.
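
The eigenvalue-selection step can be sketched as follows, assuming the signal and noise correlation matrices have already been estimated (in HySime the noise is estimated first, e.g. by multiple regression); the criterion below is a simplified reading of the method, not a faithful reimplementation.

    import numpy as np

    def signal_subspace(R_x, R_n):
        # R_x: estimated signal correlation matrix, R_n: noise correlation matrix
        vals, E = np.linalg.eigh(R_x)
        E = E[:, np.argsort(vals)[::-1]]              # eigenvectors, strongest first
        p = np.einsum('ij,jk,ki->i', E.T, R_x, E)     # signal power per direction
        s = np.einsum('ij,jk,ki->i', E.T, R_n, E)     # noise power per direction
        # keep directions where excluding signal would cost more than keeping noise
        keep = p > 2 * s
        return int(keep.sum()), E[:, keep]            # subspace dimension and basis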

Relevance:

20.00%

Publisher:

Abstract:

In hyperspectral imagery a pixel typically consists of a mixture of the spectral signatures of reference substances, also called endmembers. Linear spectral mixture analysis, or linear unmixing, aims at estimating the number of endmembers, their spectral signatures, and their abundance fractions. This paper proposes a framework for hyperspectral unmixing. A blind method (SISAL) is used for the estimation of the unknown endmember signatures and their abundance fractions. This method solves a non-convex problem by a sequence of augmented Lagrangian optimizations, where the positivity constraints, forcing the spectral vectors to belong to the convex hull of the endmember signatures, are replaced by soft constraints. The proposed framework simultaneously estimates the number of endmembers present in the hyperspectral image with an algorithm based on the minimum description length (MDL) principle. Experimental results on both synthetic and real hyperspectral data demonstrate the effectiveness of the proposed algorithm.
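
For reference, the linear mixing model underlying the framework is y = M a + noise with a >= 0; the snippet below estimates abundances for one pixel by non-negative least squares, which illustrates only the mixing model, not the SISAL algorithm itself (SISAL also estimates the endmember matrix and relaxes the constraints).

    import numpy as np
    from scipy.optimize import nnls

    def unmix_pixel(y, M):
        # y: (L,) observed spectrum; M: (L, p) endmember signatures
        a, _ = nnls(M, y)                 # non-negative abundance estimates
        s = a.sum()
        return a / s if s > 0 else a      # normalise to enforce sum-to-one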

Relevance:

20.00%

Publisher:

Abstract:

The high penetration of distributed energy resources (DER) in distribution networks and the competitive environment of electricity markets impose the use of new approaches in several domains. The network cost allocation traditionally used in transmission networks should be adapted and used in distribution networks, considering the specifications of the connected resources. The main goal is to develop a fairer methodology, trying to distribute the distribution network use costs to all players using the network in each period. In this paper, a model considering different types of costs (fixed, losses, and congestion costs) is proposed, comprising the use of a large set of DER, namely distributed generation (DG), demand response (DR) of the direct load control type, energy storage systems (ESS), and electric vehicles with the capability of discharging energy to the network, known as vehicle-to-grid (V2G). The proposed model includes three distinct phases of operation. The first phase of the model consists of an economic dispatch based on an AC optimal power flow (AC-OPF); in the second phase, Kirschen's and Bialek's tracing algorithms are used and compared to evaluate the impact of each resource on the network. Finally, the MW-mile method is used in the third phase of the proposed model. A distribution network of 33 buses with large penetration of DER is used to illustrate the application of the proposed model.
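
The third-phase allocation can be illustrated with a toy MW-mile computation; the per-user flow contributions are assumed to come from the tracing step, and all names and values here are illustrative rather than taken from the paper.

    import numpy as np

    def mw_mile(flows, lengths, line_costs):
        # flows      : (n_users, n_lines) MW contribution of each user to each line
        # lengths    : (n_lines,) line lengths (miles or km)
        # line_costs : (n_lines,) annualised cost of each line
        use = np.abs(flows) * lengths        # MW-mile use per user and line
        total = use.sum(axis=0)
        total[total == 0] = 1.0              # avoid division by zero on unused lines
        shares = use / total                 # each user's fraction of each line's use
        return shares @ line_costs           # network use cost allocated to each user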

Relevance:

20.00%

Publisher:

Abstract:

Dissertation submitted for the degree of Master in Mathematics and Applications, specialisation in Actuarial Science, Statistics and Operational Research

Relevance:

20.00%

Publisher:

Abstract:

ABSTRACT - The prevalence of obesity has not changed significantly in Portugal. Since resources are scarce and it is increasingly pressing to distribute them rationally, it is important to know the economic impact of obesity on the country and to understand whether the costs have changed. Objective: To update, in light of more recent evidence, the estimate of the direct hospital inpatient costs attributable to obesity in Portugal in 2008. Methodology: The direct cost of obesity, for the inpatient component, was estimated with the cost-of-illness methodology, using a prevalence-based approach. The prevalence data come from the most recent epidemiological study in Portugal (14.4%). The relative risk values used come from the most complete epidemiological meta-analysis. From these data, the population attributable risk (PAR) of each pathology was calculated. Using the national database of inpatient episodes, all inpatient episodes related to the comorbidities associated with obesity were retrieved and the respective PAR was applied. Costs were assigned on the basis of Portaria no. 839-A/2009 of 31 July. Results: The direct costs of obesity, for the inpatient component, in 2008 were 85.9 million euros, corresponding to 0.92% of total health expenditure. The three largest contributors to this expenditure are pathologies of the circulatory and cerebrovascular system, osteoarthritis, and the episodes related to the treatment of obesity itself. Conclusions: The economic impact of obesity-related inpatient care has decreased in Portugal. This study thus serves as a starting point for studying the total costs of obesity and the effectiveness of prevention strategies.
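
The attribution step can be sketched with Levin's formula for the population attributable risk; the relative risk and episode cost below are hypothetical, and only the 14.4% prevalence comes from the abstract.

    def attributable_cost(prevalence, relative_risk, total_episode_cost):
        # Levin's formula for the population attributable risk (PAR),
        # applied to the total inpatient cost of a given comorbidity
        par = prevalence * (relative_risk - 1) / (1 + prevalence * (relative_risk - 1))
        return par * total_episode_cost

    # e.g. 14.4% prevalence, a hypothetical RR of 2.0 and 10 M euros of episodes
    print(attributable_cost(0.144, 2.0, 10e6))   # share of cost attributable to obesity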

Relevance:

20.00%

Publisher:

Abstract:

Dissertation presented as a partial requirement for the degree of Doctor in Information Management

Relevance:

20.00%

Publisher:

Abstract:

Radio link quality estimation is essential for protocols and mechanisms such as routing, mobility management and localization, particularly for low-power wireless networks such as wireless sensor networks. Commodity Link Quality Estimators (LQEs), e.g. PRR, RNP, ETX, four-bit and RSSI, can only provide a partial characterization of links, as they ignore several link properties such as channel quality and stability. In this paper, we propose F-LQE (Fuzzy Link Quality Estimator), a holistic metric that estimates link quality on the basis of four link quality properties (packet delivery, asymmetry, stability, and channel quality) that are expressed and combined using Fuzzy Logic. We demonstrate through an extensive experimental analysis that F-LQE is more reliable than existing estimators (e.g., PRR, WMEWMA, ETX, RNP, and four-bit), as it provides a finer-grained link classification. It is also more stable, as it has a lower coefficient of variation of link estimates. Importantly, we evaluate the impact of F-LQE on the performance of tree routing, specifically the CTP (Collection Tree Protocol). For this purpose, we adapted F-LQE to build a new routing metric for CTP, which we dubbed F-LQE/RM. Extensive experimental results obtained with widely used state-of-the-art testbeds show that F-LQE/RM significantly improves CTP routing performance over four-bit (the default LQE of CTP) and ETX (another popular LQE). F-LQE/RM improves the end-to-end packet delivery by up to 16%, reduces the number of packet retransmissions by up to 32%, reduces the hop count by up to 4%, and improves the topology stability by up to 47%.
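
A toy fuzzy combination of the four properties could look like the following; the membership inputs and the mixing rule (a weighted blend of the fuzzy AND and the mean) are illustrative and not necessarily the exact rule used by F-LQE.

    def f_lqe_score(delivery, asymmetry, stability, channel_q, beta=0.6):
        # each input is a membership degree in [0, 1] describing how good
        # the link is with respect to that property
        m = [delivery, asymmetry, stability, channel_q]
        fuzzy_and = min(m)                  # pessimistic component
        mean = sum(m) / len(m)              # optimistic component
        return beta * fuzzy_and + (1 - beta) * mean

    # a link with good delivery but poor stability
    print(f_lqe_score(0.9, 0.8, 0.4, 0.7))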

Relevance:

20.00%

Publisher:

Abstract:

Sulfadimethoxine (SDM) is one of the drugs often used in the aquaculture sector to prevent the spread of disease in freshwater fish aquaculture. Its spread through soil and surface water can contribute to an increase in bacterial resistance. It is therefore important to control this product in the environment. This work proposes a simple and low-cost potentiometric device to monitor the levels of SDM in aquaculture waters, thus avoiding its unnecessary release into the environment. The device combines a micropipette tip with a PVC membrane selective to SDM, prepared from an appropriate cocktail, and an inner reference solution. The membrane includes 1% of a porphyrin derivative acting as an ionophore and a small amount of a lipophilic cationic additive (corresponding to 0.2% in molar ratio). The composition of the inner solution was optimized with regard to the kind and/or concentration of primary ion, chelating agent and/or a specific interfering charged species, in different concentration ranges. Electrodes constructed with inner reference solutions of 1 × 10⁻⁸ mol/L SDM and 1 × 10⁻⁴ mol/L chromate ion showed the best analytical features. A near-Nernstian response was obtained, with a slope of −54.1 mV/decade and a detection limit of 7.5 ng/mL (2.4 × 10⁻⁸ mol/L) that is remarkably low when compared with other electrodes of the same type. The reproducibility, stability and response time are good, and even better than those obtained with liquid-contact ISEs. Recovery values of 98.9% were obtained from the analysis of aquaculture water samples.
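
For illustration, the reported slope can be used to back-calculate a concentration from a measured potential through the Nernstian relation E = E0 + slope * log10(C); the calibration intercept E0 below is hypothetical.

    def sdm_concentration(E_mV, E0_mV, slope=-54.1):
        # E = E0 + slope * log10(C)  ->  C = 10 ** ((E - E0) / slope), in mol/L
        return 10 ** ((E_mV - E0_mV) / slope)

    # e.g. with a hypothetical intercept of 120 mV and a reading of 350 mV
    print(sdm_concentration(350.0, 120.0))   # about 5.6e-05 mol/L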

Relevance:

20.00%

Publisher:

Abstract:

This paper employs the Lyapunov direct method for the stability analysis of fractional-order linear systems subject to input saturation. A new stability condition based on the saturation function is adopted for estimating the domain of attraction via an ellipsoidal approach. To further improve this estimation, the auxiliary feedback is also supported by the concept of the stability region. The advantages of the proposed method are twofold: (1) it is straightforward to handle the problem in both analysis and design because the Lyapunov method is used; (2) the estimation leads to less conservative results. A numerical example illustrates the feasibility of the proposed method.
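
One standard ellipsoidal check of this kind, written for the integer-order case with a single saturated input, is sketched below; the fractional-order condition of the paper is less conservative and differs from this textbook test.

    import numpy as np

    def ellipsoid_avoids_saturation(P, K, u_max):
        # Ellipsoid {x : x' P x <= 1}; with K a 1-D state-feedback gain,
        # the input u = sat(K x) stays linear on the whole ellipsoid iff
        # max |K x| over it, i.e. sqrt(K P^-1 K'), does not exceed u_max.
        peak = np.sqrt(K @ np.linalg.solve(P, K))
        return peak <= u_max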

Relevance:

20.00%

Publisher:

Abstract:

Currently, due to the widespread use of computers and the internet, students are trading libraries for the World Wide Web and laboratories for simulation programs. In most courses, simulators are made available to students and can be used to verify theoretical results or to test hardware/products under development. Although this is an interesting solution (low cost, and an easy and fast way to carry out some coursework), it has major disadvantages. As everything is currently being done with or in a computer, students are losing the “feel” for the real values of physical magnitudes. For instance, in engineering studies, and mainly in the first years, students need to learn electronics, algorithms, mathematics and physics. All of these areas can use numerical analysis software, simulation software or spreadsheets, and in the majority of cases the data used are either simulated or random numbers, but real data could be used instead. For example, if a course uses numerical analysis software and needs a dataset, the students can learn to manipulate arrays. Also, when using spreadsheets to build graphics, instead of using a random table, students could use a real dataset based, for instance, on the room temperature and its variation across the day. In this work we present a framework with a simple interface that allows it to be used by different courses in which computers are part of the teaching/learning process, in order to give students a more realistic feel by using real data. The framework is based on a set of low-cost sensors for different physical magnitudes, e.g. temperature, light, wind speed, which are either connected to a central server that students can access over an Ethernet protocol or connected directly to the student's computer/laptop. These sensors use the available communication ports, such as serial ports, parallel ports, Ethernet or Universal Serial Bus (USB). Since a central server is used, students are encouraged to use the sensor readings in their different courses and consequently in different types of software, such as numerical analysis tools, spreadsheets, or simply inside any programming language when a dataset is needed. To this end, small pieces of hardware were developed, each containing at least one sensor and using different types of computer communication. As long as the sensors are attached to a server connected to the internet, these tools can also be shared between different schools. This allows sensors that are not available at a given school to be used by getting the values from other places that share them. Another remark is that students in the more advanced years, with (theoretically) more know-how, can use the courses that have some affinity with electronics development to build new sensor modules and expand the framework further. The final solution is very interesting: low cost, simple to develop, and flexible in its resources, since the same materials can be used in several courses, bringing real-world data into the students' computer work.
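
A course exercise could pull readings from the central server with a few lines of code; the host name and the line-based protocol below are hypothetical, since the abstract does not specify how the server exposes the sensors.

    import socket

    def read_sensor(host, port=5000, sensor="temperature"):
        # send the sensor name, receive one numeric reading per request
        with socket.create_connection((host, port), timeout=2) as s:
            s.sendall((sensor + "\n").encode())
            reading = s.makefile().readline()
        return float(reading)

    # e.g. build a small dataset of room-temperature readings for a spreadsheet
    # temps = [read_sensor("sensors.example.edu") for _ in range(10)]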

Relevance:

20.00%

Publisher:

Abstract:

The pathogenesis of the renal lesion upon envenomation by snakebite has been related to myolysis, hemolysis, hypotension and/or direct nephrotoxicity caused by the venom. Both primary and continuous cell culture systems provide an in vitro alternative for the quantitative evaluation of the toxicity of snake venoms. Crude Crotalus vegrandis venom was fractionated by molecular exclusion chromatography. The toxicity of C. vegrandis crude venom and of its hemorrhagic and neurotoxic fractions was evaluated on mouse primary renal cells and on a continuous line of Vero cells maintained in vitro. Cells were isolated from murine renal cortex, grown in 96-well plates with Dulbecco's Modified Essential Medium (DMEM), and challenged with the crude venom and the venom fractions. The murine renal cortex cells exhibited epithelial morphology and the majority showed smooth muscle actin, as determined by immunostaining. Cytotoxicity was evaluated by the tetrazolium colorimetric method. Cell viability was lowest for the crude venom, followed by the hemorrhagic and neurotoxic fractions, with CT50 values of 4.93, 18.41 and 50.22 µg/mL, respectively. The Vero cell cultures seemed to be more sensitive, with CT50 values of 2.9 and 1.4 µg/mL for the crude venom and the hemorrhagic peak, respectively. The results of this study show the potential of using cell culture systems to evaluate venom toxicity.
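
CT50 values of this kind are typically obtained by fitting a dose-response curve to the viability data; the sketch below uses a four-parameter logistic model, which is a common choice but is not the model stated in the abstract.

    import numpy as np
    from scipy.optimize import curve_fit

    def fit_ct50(conc, viability):
        # conc: venom concentrations (ug/mL); viability: viable fraction (0-1)
        def logistic(x, top, bottom, ct50, hill):
            return bottom + (top - bottom) / (1 + (x / ct50) ** hill)
        p0 = [1.0, 0.0, float(np.median(conc)), 1.0]   # initial guesses
        (top, bottom, ct50, hill), _ = curve_fit(logistic, conc, viability,
                                                 p0=p0, maxfev=10000)
        return ct50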