963 results for Step response analysis


Relevance: 30.00%

Abstract:

This chapter considers the particle swarm optimization (PSO) algorithm as a system whose dynamics is studied from the point of view of fractional calculus. In this study, some initial swarm particles are randomly changed in order to stimulate the system, and its response is compared with a non-perturbed reference response. The effect of the perturbation on the PSO evolution is observed through the time behaviour of the fitness of the best particle. The dynamics is represented by the median of a sample of experiments, and Fourier analysis is adopted to describe the phenomena. The influence upon the global dynamics is also analyzed. Two main issues are reported: the PSO dynamics when the system is subjected to random perturbations, and its modelling with fractional-order transfer functions.
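
As a rough illustration of the experiment described above, the sketch below runs a toy PSO on a sphere benchmark twice, once from a reference initial swarm and once with a few randomly re-drawn particles, takes the median best-fitness signal over a sample of runs, and applies Fourier analysis to the difference. The fitness function, PSO coefficients, and sample sizes are all illustrative choices, not the paper's setup.

```python
import numpy as np

def pso_best_fitness(seed, n_iters=200, n_particles=30, dim=10, perturb=False):
    """Toy PSO on the sphere benchmark; returns the best fitness per iteration."""
    rng = np.random.default_rng(seed)
    f = lambda X: np.sum(X**2, axis=1)            # illustrative fitness function
    X = rng.uniform(-5, 5, (n_particles, dim))    # initial swarm
    if perturb:                                   # randomly change some initial particles
        idx = rng.choice(n_particles, size=5, replace=False)
        X[idx] = rng.uniform(-5, 5, (5, dim))
    V = np.zeros_like(X)
    P, Pf = X.copy(), f(X)                        # personal bests
    g = P[np.argmin(Pf)]                          # global best
    history = []
    for _ in range(n_iters):
        r1, r2 = rng.random(X.shape), rng.random(X.shape)
        V = 0.7 * V + 1.5 * r1 * (P - X) + 1.5 * r2 * (g - X)
        X = X + V
        fx = f(X)
        better = fx < Pf
        P[better], Pf[better] = X[better], fx[better]
        g = P[np.argmin(Pf)]
        history.append(Pf.min())
    return np.array(history)

# Median fitness signal over a sample of runs, then Fourier analysis of the
# perturbed-minus-reference difference, as in the experiment described above.
ref = np.median([pso_best_fitness(s) for s in range(21)], axis=0)
per = np.median([pso_best_fitness(s, perturb=True) for s in range(21)], axis=0)
spectrum = np.abs(np.fft.rfft(per - ref))
```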

Relevance: 30.00%

Abstract:

Introduction: in the multimodality imaging environment there is a requirement for greater understanding of the imaging technologies used, the limitations of these technologies, and how best to interpret the results, together with dose optimization, the introduction of new techniques, and the gap between current practice and best practice. Incidental findings in low-dose CT images obtained as part of the hybrid imaging process are an increasing phenomenon with advancing CT technology, with resultant ethical and medico-legal dilemmas; understanding the limitations of these procedures is important when reporting images and recommending follow-up. A free-response observer performance study was used to evaluate lesion detection in low-dose CT images obtained during attenuation correction acquisitions for myocardial perfusion imaging on two hybrid imaging systems.

Relevance: 30.00%

Abstract:

“Drilling of polymeric matrix composite structures”

Relevance: 30.00%

Abstract:

This paper analyzes Knowledge Management (KM) as a political activity practised by the great political leaders of the world. We examine whether, and how, KM is performed at the macro-political level. The research is relevant because, given that we live in a knowledge society in the Information Era, it seems natural that political leaders should also practise KM. However, we know of no previous study on KM and world leaders, and this paper aims to be a first step towards filling that gap. Methodologically, we use a literature review: since this is a first, preliminary study, we rely on data found on the Internet and in databases such as EBSCO. The analysis is divided into two main parts: theoretical ideas first and an application second, with the latter further divided into two segments, the past and the present. We find, not surprisingly, that KM always was and remains pervasive in the activity of world leaders, and that it has become more diverse as power itself has become more widely disseminated in the world. The study has the limitation of relying on insights and texts rather than interviews, but we believe this kind of analysis is worthwhile and that such studies may help improve democracies around the world.

Relevance: 30.00%

Abstract:

High-speed trains, when crossing regions with abrupt changes in the vertical stiffness of the track and/or subsoil, may generate excessive ground and track vibrations. There is an urgent need for specific analyses of this problem so as to allow reliable estimates of vibration amplitude. Full understanding of these phenomena will lead to new construction solutions and mitigation of undesirable features. In this paper, analytical transient solutions of the dynamic response of one-dimensional systems with a sudden change of foundation stiffness are derived. Results are expressed in terms of vertical displacement, and a sensitivity analysis of the response amplitude is also performed. The analytical expressions presented herein have, to the authors' knowledge, not been published before. Although related to one-dimensional cases, they give useful insight into the problem; nevertheless, in order to obtain a realistic response, vehicle-rail interaction cannot be omitted. Results and conclusions are confirmed using the general-purpose commercial software ANSYS. In conclusion, this work contributes to a better understanding of the additional vibration phenomenon due to vertical stiffness variation, permitting better control of the train velocity and optimization of the track design.
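
To make the setting concrete, here is a minimal finite-difference sketch (not the paper's analytical solution) of a one-dimensional system with a sudden change of foundation stiffness: a taut string on a Winkler foundation whose stiffness jumps at mid-span, excited by a moving point load. All parameter values are illustrative.

```python
import numpy as np

# Transient response of a taut string on a Winkler foundation whose stiffness
# changes abruptly at mid-span, under a moving point load (explicit scheme).
L, n = 100.0, 1001                 # span [m], grid points
x = np.linspace(0, L, n)
dx = x[1] - x[0]
c = 50.0                           # wave speed [m/s]
m = 60.0                           # mass per unit length [kg/m]
k = np.where(x < L / 2, 1e4, 1e5)  # sudden foundation stiffness change [N/m^2]
v, F0 = 40.0, 1e3                  # load speed [m/s], load magnitude [N]

dt = 0.5 * dx / c                  # explicit scheme, CFL-limited time step
u = np.zeros(n); u_old = np.zeros(n)
for step in range(int(L / v / dt)):
    t = step * dt
    f = np.zeros(n)
    f[int(v * t / dx)] = F0 / dx   # moving point load as a grid impulse
    lap = np.zeros(n)
    lap[1:-1] = (u[2:] - 2 * u[1:-1] + u[:-2]) / dx**2
    u_new = 2 * u - u_old + dt**2 * (c**2 * lap - (k / m) * u + f / m)
    u_new[0] = u_new[-1] = 0.0     # fixed ends
    u_old, u = u, u_new

print("peak displacement magnitude [m]:", np.abs(u).max())
```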

Relevance: 30.00%

Abstract:

Functionally graded composite materials can provide continuously varying properties whose distribution can vary according to a specific location within the composite. Most frequently, functionally graded materials consider a through-thickness variation law, which can be more or less smooth but always possesses an important characteristic: a continuous property variation profile, which eliminates the abrupt stress discontinuities found in laminated composites. This study aims to analyze the transient dynamic behavior of sandwich structures having a metallic core and functionally graded outer layers. To this purpose, the properties of the particulate composite metal-ceramic outer layers are estimated using the Mori-Tanaka scheme, and the dynamic analyses consider first-order and higher-order shear deformation theories implemented through a kriging-based finite element method. The transient dynamic response of these structures is obtained through the Bossak-Newmark method. The illustrative cases presented in this work consider the influence of the shape functions' interpolation domain, the through-thickness distribution of the properties, the use of different materials, aspect ratios and boundary conditions. (C) 2014 Elsevier Ltd. All rights reserved.
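
As a concrete illustration of the homogenization step, the sketch below evaluates a Mori-Tanaka estimate of the effective Young's modulus through the thickness of a metal-ceramic layer with a power-law volume fraction profile. The material values and grading exponent are illustrative choices, not the paper's data.

```python
import numpy as np

# Mori-Tanaka estimate of the effective Young's modulus of a metal-ceramic
# functionally graded layer, power-law grading through the thickness.
Em, num = 70e9, 0.30      # metal matrix: Young's modulus [Pa], Poisson ratio
Ec, nuc = 380e9, 0.25     # ceramic particles (Al / alumina-like, illustrative)
p = 2.0                   # power-law exponent of the grading profile

def bulk_shear(E, nu):
    return E / (3 * (1 - 2 * nu)), E / (2 * (1 + nu))

Km, Gm = bulk_shear(Em, num)
Kc, Gc = bulk_shear(Ec, nuc)

def mori_tanaka_E(Vc):
    """Effective Young's modulus for ceramic volume fraction Vc in a metal matrix."""
    K = Km + Vc * (Kc - Km) / (1 + (1 - Vc) * (Kc - Km) / (Km + 4 * Gm / 3))
    f1 = Gm * (9 * Km + 8 * Gm) / (6 * (Km + 2 * Gm))
    G = Gm + Vc * (Gc - Gm) / (1 + (1 - Vc) * (Gc - Gm) / (Gm + f1))
    return 9 * K * G / (3 * K + G)

z = np.linspace(-0.5, 0.5, 11)        # non-dimensional thickness coordinate
Vc = (z + 0.5) ** p                   # ceramic fraction: 0 (bottom) -> 1 (top)
for zi, E in zip(z, mori_tanaka_E(Vc)):
    print(f"z/h = {zi:+.2f}  E_eff = {E / 1e9:6.1f} GPa")
```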

Relevance: 30.00%

Abstract:

The ecotoxicological response of the living organisms in an aquatic system depends on physical, chemical and bacteriological variables, as well as on the interactions between them. An important challenge to scientists is to understand the interaction and behaviour of the factors involved in a multidimensional process such as the ecotoxicological response. With this aim, multiple linear regression (MLR) and principal component regression were applied to the ecotoxicity bioassay response of Chlorella vulgaris and Vibrio fischeri in water collected at seven sites of the Leça River during five monitoring campaigns (February, May, June, August and September of 2006). The river water characterization included the analysis of 22 physicochemical and 3 microbiological parameters. The model that best fitted the data was MLR, which shows: (i) a negative correlation with dissolved organic carbon, zinc and manganese, and a positive one with turbidity and arsenic, regarding the C. vulgaris toxic response; (ii) a negative correlation with conductivity and turbidity and a positive one with phosphorus, hardness, iron, mercury, arsenic and faecal coliforms, concerning the V. fischeri toxic response. This integrated assessment may allow the evaluation of the effect of future pollution abatement measures on the water quality of the Leça River.
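
For readers unfamiliar with the MLR step, the sketch below fits a multiple linear regression of a toxic response on a handful of the water quality parameters named above. The data are simulated; the variable names merely echo the abstract, and none of the coefficients reflect the study's actual results.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

# Minimal MLR sketch with made-up data standing in for the measured
# physicochemical parameters and bioassay responses.
rng = np.random.default_rng(0)
n = 35                                   # 7 sites x 5 campaigns
predictors = ["DOC", "Zn", "Mn", "turbidity", "As"]
X = rng.normal(size=(n, len(predictors)))
y = X @ np.array([-0.8, -0.5, -0.4, 0.6, 0.7]) + rng.normal(0.0, 0.2, n)

model = LinearRegression().fit(X, y)     # toxic response ~ water parameters
for name, coef in zip(predictors, model.coef_):
    print(f"{name:>9s}: {coef:+.2f}")
print("R^2 =", round(model.score(X, y), 3))
```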

Relevance: 30.00%

Abstract:

The humoral and cellular immune responses, as well as the resistance to infection with bloodstream forms of T. cruzi, were studied in mice immunized with acidic antigenic fractions from the parasite cytosol, F III and F IV, plus Bordetella pertussis as adjuvant. Immunization with F III induced positive ITH and DTH responses to homologous antigens. In mice immunized with F IV, the ITH was negative and four out of six animals presented positive DTH reactions. In both groups of mice, the analysis of IgG against T. cruzi showed that the major isotype elicited was IgG1. Specific IgE was also detected in sera from F III-immunized mice, thus confirming the presence of homocytotropic antibodies. The parasitemias reached by F III- and F IV-immunized mice after challenge were lower than those of the controls, showing partial protection against the acute infection. Histological studies of heart and skeletal muscle performed two months after the infection revealed variable mononuclear infiltration in all infected mice despite immunization.

Relevance: 30.00%

Abstract:

Hard real-time multiprocessor scheduling has seen, in recent years, the flourishing of semi-partitioned scheduling algorithms. This category of scheduling schemes combines elements of partitioned and global scheduling for the purposes of achieving efficient utilization of the system's processing resources with strong schedulability guarantees and with low dispatching overheads. The sub-class of slot-based "task-splitting" scheduling algorithms, in particular, offers very good trade-offs between schedulability guarantees (in the form of high utilization bounds) and the number of preemptions/migrations involved. However, so far no unified schedulability theory existed for such algorithms; each one was formulated with its own accompanying analysis. This article changes this fragmented landscape by formulating a more unified schedulability theory covering the two state-of-the-art slot-based semi-partitioned algorithms, S-EKG and NPS-F (both fixed job-priority based). This new theory is based on exact schedulability tests, thus also overcoming many sources of pessimism in existing analyses. In turn, since schedulability testing guides the task assignment under the schemes in consideration, we also formulate an improved task assignment procedure. As the other main contribution of this article, and in response to the fact that many unrealistic assumptions present in the original theory tend to undermine the theoretical potential of such scheduling schemes, we identified and modelled into the new analysis all overheads incurred by the algorithms in consideration. The outcome is a new overhead-aware schedulability analysis that permits increased efficiency and reliability. The merits of this new theory are evaluated by an extensive set of experiments.
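
As a didactic aside, the sketch below illustrates the basic semi-partitioned idea of splitting a task's utilization across processors when it does not fit whole. It is a toy first-fit procedure, not the actual S-EKG or NPS-F assignment, and it assumes total utilization does not exceed the platform capacity.

```python
# Toy illustration of the semi-partitioned idea: assign task utilizations to
# processors first-fit, and when a task does not fit, split its utilization
# across the current processor and the next one (time-slot reserves would
# then serve the two pieces). Not the actual S-EKG/NPS-F procedure.
def semi_partition(utils, n_procs, cap=1.0):
    procs = [[] for _ in range(n_procs)]
    load = [0.0] * n_procs
    p = 0
    for i, u in enumerate(utils):
        while u > 0:
            room = cap - load[p]
            if room <= 0:              # current processor full, move on
                p += 1
                continue
            piece = min(u, room)       # split the task if it overflows
            procs[p].append((f"T{i}", piece))
            load[p] += piece
            u -= piece
    return procs

for k, reserves in enumerate(semi_partition([0.6, 0.7, 0.5, 0.8, 0.4], 3)):
    print(f"P{k}:", reserves)
```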

Relevance: 30.00%

Abstract:

OBJECTIVE: To evaluate the predictive value of genetic polymorphisms in the context of BCG immunotherapy outcome and to create a predictive profile that may allow discriminating the risk of recurrence. MATERIAL AND METHODS: In a dataset of 204 patients treated with BCG, we evaluated 42 genetic polymorphisms in 38 genes involved in the BCG mechanism of action, using Sequenom MassARRAY technology. Stepwise multivariate Cox regression was used for data mining. RESULTS: In agreement with previous studies, we observed that gender, age, tumor multiplicity and treatment scheme were associated with BCG failure. Using stepwise multivariate Cox regression analysis, we propose the first predictive profile of BCG immunotherapy outcome and a risk score based on polymorphisms in immune system molecules (SNPs TNFA-1031T/C (rs1799964), IL2RA rs2104286 T/C, IL17A-197G/A (rs2275913), IL17RA-809A/G (rs4819554), IL18R1 rs3771171 T/C, ICAM1 K469E (rs5498), FASL-844T/C (rs763110) and TRAILR1-397T/G (rs79037040)) in association with clinicopathological variables. This risk score allows the categorization of patients into risk groups: patients within the Low Risk group have a 90% chance of successful treatment, whereas patients in the High Risk group present a 75% chance of recurrence after BCG treatment. CONCLUSION: We have established the first predictive score of BCG immunotherapy outcome combining clinicopathological characteristics and a panel of genetic polymorphisms. Further studies using an independent cohort are warranted. Moreover, the inclusion of other biomarkers may help to improve the proposed model.
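
A hedged sketch of the modelling step, using the lifelines library's Cox proportional hazards implementation on simulated data: the column names echo the abstract, but the values, effect sizes, and the median split into risk groups are all illustrative.

```python
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

# Simulated stand-in for the study data: clinicopathological variables plus
# SNP genotypes, with a Cox model turned into a simple risk score.
rng = np.random.default_rng(1)
n = 204
df = pd.DataFrame({
    "months_to_recurrence": rng.exponential(30, n),
    "recurred": rng.integers(0, 2, n),
    "age": rng.normal(68, 9, n),
    "multiplicity": rng.integers(1, 4, n),
    "TNFA_rs1799964": rng.integers(0, 3, n),   # genotypes coded 0/1/2
    "IL17A_rs2275913": rng.integers(0, 3, n),
})

cph = CoxPHFitter()
cph.fit(df, duration_col="months_to_recurrence", event_col="recurred")
risk = cph.predict_partial_hazard(df)          # higher value = higher risk
df["risk_group"] = np.where(risk > risk.median(), "High Risk", "Low Risk")
print(df["risk_group"].value_counts())
```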

Relevance: 30.00%

Abstract:

Fractal geometry is used to model a naturally fractured reservoir, and the concept of the fractional derivative is applied to the diffusion equation to incorporate the history of fluid flow in naturally fractured reservoirs. The resulting fractally fractional diffusion (FFD) equation is solved analytically in Laplace space for three outer boundary conditions. The analytical solutions are used to analyze the response of a naturally fractured reservoir, considering the anomalous behavior of oil production. Several synthetic examples are provided to illustrate the methodology proposed in this work and to explain the diffusion process in fractally fractured systems.
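
Analytical Laplace-space solutions such as these are typically brought back to the time domain numerically. The sketch below implements the classic Gaver-Stehfest inversion algorithm and checks it on a transform with a known inverse; it is generic machinery, not the paper's actual FFD solution.

```python
import numpy as np
from math import factorial, log

def stehfest(F, t, N=12):
    """Approximate f(t) from its Laplace transform F(s) (N must be even)."""
    h = N // 2
    total = 0.0
    for k in range(1, N + 1):
        v = 0.0
        for j in range((k + 1) // 2, min(k, h) + 1):
            v += (j**h * factorial(2 * j)
                  / (factorial(h - j) * factorial(j) * factorial(j - 1)
                     * factorial(k - j) * factorial(2 * j - k)))
        v *= (-1) ** (k + h)
        total += v * F(k * log(2) / t)
    return total * log(2) / t

F = lambda s: 1.0 / (s + 1.0)        # Laplace transform of exp(-t)
for t in (0.5, 1.0, 2.0):
    print(t, stehfest(F, t), np.exp(-t))
```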

Relevance: 30.00%

Abstract:

In today’s healthcare paradigm, optimal sedation during anesthesia plays an important role both in patient welfare and in the socio-economic context. For the closed-loop control of general anesthesia, two drugs have proven to have stable, rapid onset times: propofol and remifentanil. The effect of these drugs is reflected in the bispectral index, a measure derived from the EEG signal. In this paper, wavelet time-frequency analysis is used to extract useful information from the clinical signals, since they are time-varying and mark important changes in the patient's response to drug dose. Model-based predictive control algorithms are employed to regulate the depth of sedation by manipulating these two drugs. The results of identification from real data and the simulation of the closed-loop control performance suggest that the proposed approach can bring an improvement of 9% in overall robustness and may be suitable for clinical practice.
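
A minimal sketch of the wavelet time-frequency step, using the PyWavelets continuous wavelet transform on a simulated BIS-like signal whose character changes after a hypothetical drug bolus; the sampling rate, wavelet choice, and signal model are illustrative, not taken from the paper.

```python
import numpy as np
import pywt

# Continuous wavelet transform of a simulated BIS-like signal with a
# (hypothetical) drug bolus at t = 60 s.
fs = 1.0                                   # BIS is typically reported ~1 Hz
t = np.arange(0, 300, 1 / fs)
bis = 50 + 5 * np.sin(2 * np.pi * 0.01 * t)          # slow baseline drift
bis[t > 60] -= 15 * np.exp(-(t[t > 60] - 60) / 40)   # response to the bolus
bis += np.random.default_rng(0).normal(0, 1, t.size) # measurement noise

scales = np.arange(2, 64)
coefs, freqs = pywt.cwt(bis, scales, "morl", sampling_period=1 / fs)
energy = np.abs(coefs) ** 2
# A simple feature: the time at which low-frequency wavelet energy peaks,
# marking the change in the patient's response.
print("peak response around t =", t[energy[-10:].sum(axis=0).argmax()], "s")
```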

Relevance: 30.00%

Abstract:

Hyperspectral instruments have been incorporated into satellite missions, providing large amounts of data of high spectral resolution of the Earth's surface. These data can be used in remote sensing applications that often require a real-time or near-real-time response. To avoid delays between hyperspectral image acquisition and its interpretation, the latter usually done on a ground station, onboard systems have emerged to process data, reducing the volume of information to transfer from the satellite to the ground station. For this purpose, compact reconfigurable hardware modules, such as field-programmable gate arrays (FPGAs), are widely used. This paper proposes an FPGA-based architecture for hyperspectral unmixing. The method is based on vertex component analysis (VCA) and works without a dimensionality reduction preprocessing step. The architecture has been designed for a low-cost Xilinx Zynq board with a Zynq-7020 system-on-chip, based on Artix-7 FPGA programmable logic, and tested using real hyperspectral data. Experimental results indicate that the proposed implementation can achieve real-time processing while maintaining the method's accuracy, which indicates the potential of the proposed platform for implementing high-performance, low-cost embedded systems, opening perspectives for onboard hyperspectral image processing.

Relevance: 30.00%

Abstract:

Endmember extraction (EE) is a fundamental and crucial task in hyperspectral unmixing. Among other methods, vertex component analysis (VCA) has become a very popular and useful tool to unmix hyperspectral data. VCA is a geometry-based method that extracts endmember signatures from large hyperspectral datasets without the use of any a priori knowledge about the constituent spectra. Many hyperspectral imagery applications require a response in real time or near real time. To meet this requirement, this paper proposes a parallel implementation of VCA developed for graphics processing units. The impact on the complexity and on the accuracy of the proposed parallel implementation of VCA is examined using both simulated and real hyperspectral datasets.
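
The geometric idea behind VCA can be sketched in a few lines: repeatedly project the data onto a direction orthogonal to the subspace spanned by the endmembers found so far, and take the pixel with the largest projection as the next endmember. The simplified NumPy version below omits the SNR-dependent projections of the full algorithm and is serial rather than GPU-parallel.

```python
import numpy as np

def vca_like(Y, p, seed=0):
    """Simplified VCA-style extraction. Y: bands x pixels; p: endmember count."""
    rng = np.random.default_rng(seed)
    bands, _ = Y.shape
    E = np.zeros((bands, p))
    for i in range(p):
        w = rng.normal(size=bands)
        if i > 0:                        # remove the span of found endmembers
            Q, _ = np.linalg.qr(E[:, :i])
            w -= Q @ (Q.T @ w)
        idx = np.argmax(np.abs(w @ Y))   # pixel most aligned with direction w
        E[:, i] = Y[:, idx]
    return E

# Synthetic test: 3 random endmembers mixed with Dirichlet abundances.
rng = np.random.default_rng(1)
M = rng.random((50, 3))                              # true endmembers
A = rng.dirichlet(np.ones(3), size=2000).T           # abundances (sum to 1)
Y = M @ A + 0.001 * rng.normal(size=(50, 2000))      # linear mixtures + noise
E = vca_like(Y, 3)
dists = np.linalg.norm(E[:, :, None] - M[:, None, :], axis=0)
print("closest-match distances:", dists.min(axis=1).round(3))
```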

Relevance: 30.00%

Abstract:

The development of high spatial resolution airborne and spaceborne sensors has improved the capability of ground-based data collection in the fields of agriculture, geography, geology, mineral identification, detection [2, 3], and classification [4–8]. The signal read by the sensor from a given spatial element of resolution and at a given spectral band is a mixture of components originated by the constituent substances, termed endmembers, located at that element of resolution. This chapter addresses hyperspectral unmixing, which is the decomposition of the pixel spectra into a collection of constituent spectra, or spectral signatures, and their corresponding fractional abundances indicating the proportion of each endmember present in the pixel [9, 10].

Depending on the mixing scales at each pixel, the observed mixture is either linear or nonlinear [11, 12]. The linear mixing model holds when the mixing scale is macroscopic [13]; the nonlinear model holds when the mixing scale is microscopic (i.e., intimate mixtures) [14, 15]. The linear model assumes negligible interaction among distinct endmembers [16, 17], whereas the nonlinear model assumes that incident solar radiation is scattered by the scene through multiple bounces involving several endmembers [18]. Under the linear mixing model, and assuming that the number of endmembers and their spectral signatures are known, hyperspectral unmixing is a linear problem, which can be addressed, for example, under the maximum likelihood setup [19], the constrained least-squares approach [20], spectral signature matching [21], the spectral angle mapper [22], and subspace projection methods [20, 23, 24]. Orthogonal subspace projection [23] reduces the data dimensionality, suppresses undesired spectral signatures, and detects the presence of a spectral signature of interest; the basic concept is to project each pixel onto a subspace that is orthogonal to the undesired signatures. As shown by Settle [19], the orthogonal subspace projection technique is equivalent to the maximum likelihood estimator. This projection technique was extended by three unconstrained least-squares approaches [24] (signature space orthogonal projection, oblique subspace projection, and target signature space orthogonal projection). Other works using the maximum a posteriori probability (MAP) framework [25] and projection pursuit [26, 27] have also been applied to hyperspectral data.

In most cases the number of endmembers and their signatures are not known. Independent component analysis (ICA) is an unsupervised source separation process that has been applied with success to blind source separation, feature extraction, and unsupervised recognition [28, 29]. ICA consists of finding a linear decomposition of the observed data yielding statistically independent components. Given that hyperspectral data are, in given circumstances, linear mixtures, ICA comes to mind as a possible tool to unmix this class of data. In fact, the application of ICA to hyperspectral data has been proposed in reference 30, where endmember signatures are treated as sources and the mixing matrix is composed of the abundance fractions, and in references 9, 25, and 31–38, where sources are the abundance fractions of each endmember. In the first approach, we face two problems: (1) the number of samples is limited to the number of channels, and (2) the process of pixel selection, playing the role of mixed sources, is not straightforward.
In the second approach, ICA is based on the assumption of mutually independent sources, which is not the case for hyperspectral data, since the sum of the abundance fractions is constant, implying dependence among the abundances. This dependence compromises the applicability of ICA to hyperspectral images. In addition, hyperspectral data are immersed in noise, which degrades ICA performance. IFA [39] was introduced as a method for recovering independent hidden sources from their observed noisy mixtures. IFA implements two steps: first, source densities and noise covariance are estimated from the observed data by maximum likelihood; second, sources are reconstructed by an optimal nonlinear estimator. Although IFA is a well-suited technique to unmix independent sources under noisy observations, the dependence among abundance fractions in hyperspectral imagery compromises, as in the ICA case, the IFA performance.

Considering the linear mixing model, hyperspectral observations lie in a simplex whose vertices correspond to the endmembers. Several approaches [40–43] have exploited this geometric feature of hyperspectral mixtures [42]. The minimum volume transform (MVT) algorithm [43] determines the simplex of minimum volume containing the data. The MVT-type approaches are complex from the computational point of view: usually, these algorithms first find the convex hull defined by the observed data and then fit a minimum volume simplex to it. Aiming at a lower computational complexity, some algorithms such as vertex component analysis (VCA) [44], the pixel purity index (PPI) [42], and N-FINDR [45] still find the minimum volume simplex containing the data cloud, but they assume the presence in the data of at least one pure pixel of each endmember. This is a strong requisite that may not hold in some data sets; in any case, these algorithms find the set of most pure pixels in the data.

Hyperspectral sensors collect spatial images over many narrow contiguous bands, yielding large amounts of data. For this reason, very often the processing of hyperspectral data, unmixing included, is preceded by a dimensionality reduction step to reduce computational complexity and to improve the signal-to-noise ratio (SNR). Principal component analysis (PCA) [46], maximum noise fraction (MNF) [47], and singular value decomposition (SVD) [48] are three well-known projection techniques widely used in remote sensing in general and in unmixing in particular. The newly introduced method [49] exploits the structure of hyperspectral mixtures, namely the fact that spectral vectors are nonnegative. The computational complexity associated with these techniques is an obstacle to real-time implementations; to overcome this problem, band selection [50] and non-statistical [51] algorithms have been introduced.

This chapter addresses hyperspectral data source dependence and its impact on ICA and IFA performance. The study considers simulated and real data and is based on mutual information minimization. Hyperspectral observations are described by a generative model that takes into account the degradation mechanisms normally found in hyperspectral applications, namely signature variability [52–54], abundance constraints, topography modulation, and system noise. The computation of mutual information is based on fitting mixtures of Gaussians (MOG) to the data. The MOG parameters (number of components, means, covariances, and weights) are inferred using a minimum description length (MDL) based algorithm [55].
We study the behavior of the mutual information as a function of the unmixing matrix. The conclusion is that the unmixing matrix minimizing the mutual information might be very far from the true one. Nevertheless, some abundance fractions might be well separated, mainly in the presence of strong signature variability, a large number of endmembers, and high SNR. We end this chapter by sketching a new methodology to blindly unmix hyperspectral data, in which abundance fractions are modeled as a mixture of Dirichlet sources. This model enforces the positivity and constant-sum (full additivity) constraints on the sources. The mixing matrix is inferred by an expectation-maximization (EM)-type algorithm. This approach is in the vein of references 39 and 56, replacing the independent sources represented by MOG with a mixture of Dirichlet sources. Compared with the geometric-based approaches, the advantage of this model is that there is no need for pure pixels in the observations.

The chapter is organized as follows. Section 6.2 presents a spectral radiance model and formulates spectral unmixing as a linear problem accounting for abundance constraints, signature variability, topography modulation, and system noise. Section 6.3 presents a brief summary of the ICA and IFA algorithms. Section 6.4 illustrates the performance of IFA and of some well-known ICA algorithms with experimental data. Section 6.5 studies the limitations of ICA and IFA in unmixing hyperspectral data. Section 6.6 presents results of ICA based on real data. Section 6.7 describes the new blind unmixing scheme and some illustrative examples. Section 6.8 concludes with some remarks.
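
To ground the linear mixing model used throughout the chapter, the sketch below generates synthetic linear mixtures y = Ma + n with nonnegative, sum-to-one abundances and recovers the abundances by fully constrained least squares, implemented with the common trick of appending a heavily weighted sum-to-one equation to a nonnegative least-squares solve. All sizes and noise levels are illustrative.

```python
import numpy as np
from scipy.optimize import nnls

# Synthetic linear mixtures y = M a + n, abundances a >= 0 summing to one.
rng = np.random.default_rng(0)
bands, p, pixels = 100, 4, 500
M = rng.random((bands, p))                     # endmember signatures (assumed known)
A = rng.dirichlet(np.ones(p), size=pixels).T   # true abundances (p x pixels)
Y = M @ A + 0.005 * rng.normal(size=(bands, pixels))

# Fully constrained least squares: the sum-to-one constraint is enforced as a
# heavily weighted extra row inside a nonnegative least-squares solve.
delta = 50.0
M_aug = np.vstack([M, delta * np.ones((1, p))])
A_hat = np.column_stack([nnls(M_aug, np.append(y, delta))[0] for y in Y.T])
print("mean abs abundance error:", np.abs(A_hat - A).mean())
```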