944 results for Almost Optimal Density Function


Relevance:

30.00%

Publisher:

Abstract:

Over recent decades, remote sensing has emerged as an effective tool for improving agricultural productivity. In particular, many works have dealt with the problem of identifying characteristics or phenomena of crops and orchards at different scales using remotely sensed images. Since natural processes are scale-dependent and most of them are hierarchically structured, determining the optimal study scales is mandatory for understanding these processes and their interactions. The multi-scale/multi-resolution concept inherent to OBIA methodologies allows the scale problem to be addressed, but it requires multi-scale, hierarchical segmentation algorithms. The question that remains unsolved is how to determine the segmentation scale that allows different objects and phenomena to be characterized in a single image. In this work, an adaptation of the Simple Linear Iterative Clustering (SLIC) algorithm to perform a multi-scale hierarchical segmentation of satellite images is proposed. The optimal multi-scale segmentation for different regions of the image is selected by evaluating the intra-variability and inter-heterogeneity of the regions obtained at each scale with respect to the parent regions defined by the coarsest scale. To achieve this goal, an objective function that combines weighted variance and the global Moran index is used. Two kinds of experiment have been carried out, generating the number of regions at each scale through linear and dyadic approaches. This methodology allows, on the one hand, objects to be detected at different scales and, on the other, all of them to be represented in a single image. Altogether, the procedure provides the user with a better comprehension of the land cover, the objects on it, and the phenomena occurring.
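
The abstract names the two ingredients of the objective function but not its exact form; the sketch below assumes one common OBIA formulation (area-weighted intra-region variance plus global Moran's I over region means, combined with a weight alpha). The function names, the weight and the omitted cross-scale normalisation are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def weighted_variance(region_sizes, region_variances):
    """Area-weighted mean of per-region spectral variance (intra-variability)."""
    sizes = np.asarray(region_sizes, dtype=float)
    return np.sum(sizes * np.asarray(region_variances)) / np.sum(sizes)

def global_moran(region_means, adjacency):
    """Global Moran's I over region mean values (inter-heterogeneity).
    adjacency: symmetric 0/1 matrix, zero diagonal, 1 where two regions touch."""
    z = np.asarray(region_means, dtype=float)
    z = z - z.mean()
    n, w_sum = len(z), adjacency.sum()
    return (n / w_sum) * np.sum(adjacency * np.outer(z, z)) / np.sum(z**2)

def segmentation_score(sizes, variances, means, adjacency, alpha=0.5):
    """Lower is better: homogeneous regions that differ from their neighbours.
    Cross-scale normalisation of the two terms is omitted for brevity."""
    return alpha * weighted_variance(sizes, variances) \
        + (1 - alpha) * global_moran(means, adjacency)
```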

Relevance:

30.00%

Publisher:

Abstract:

To obtain supporting data for a study of water quality in fish-farming ponds, a 166-day experiment was carried out with a native species, pacu (Piaractus mesopotamicus). Two dietary protein levels (16% and 34% crude protein) and three stocking densities (0.25, 0.50 and 0.77 fish/m²) were tested in the ponds. The results showed that the interaction between stocking density and experiment duration affected the bicarbonate and alkalinity variables, while the interaction between stocking density and protein percentage affected the free and total CO2 concentrations, conductivity and pH (P < 0.05). Water temperature in the ponds varied significantly over the study period (P < 0.05), decreasing gradually from summer to winter. There was no significant difference in water residence time in the ponds (P > 0.05) over the course of the experiment. The remaining parameters were not affected by the treatments over the study period.
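
As a hedged illustration of the factorial analysis described (two crude-protein levels crossed with three stocking densities, testing interaction effects on water-quality variables), a two-way ANOVA could be run as below. The synthetic data and the statsmodels workflow are assumptions, not the study's actual measurements or pipeline.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
from statsmodels.formula.api import ols

# Synthetic stand-in for the 2 x 3 factorial design (protein level x stocking
# density); values are made up, not the study's measurements.
rng = np.random.default_rng(0)
protein = np.repeat([16, 34], 30)
density = np.tile(np.repeat([0.25, 0.50, 0.77], 10), 2)
pH = 7.5 + 0.02 * protein * density + rng.normal(0, 0.1, 60)
df = pd.DataFrame({"protein": protein, "density": density, "pH": pH})

# Two-way ANOVA with interaction, mirroring the reported density x protein
# effect on free/total CO2, conductivity and pH.
model = ols("pH ~ C(protein) * C(density)", data=df).fit()
print(sm.stats.anova_lm(model, typ=2))
```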

Relevance:

30.00%

Publisher:

Abstract:

Wingtip vortices represent a hazard for the stability of the following airplane in airport highways. These flows have usually been modeled as swirling jets/wakes, which are known to be highly unstable and susceptible to breakdown at high Reynolds numbers for certain flow conditions, albeit different from those present in real flying airplanes. A very recent study based on Direct Numerical Simulations (DNS) shows that a large variety of helical responses can be excited and amplified when a harmonic inlet forcing is imposed. In this work, the optimal response of the q-vortex (whose axial vorticity and axial velocity can both be modeled by a Gaussian profile) is studied by considering the time-harmonically forced problem at a given frequency ω. We first reproduce Guo and Sun's results for the Lamb-Oseen vortex (no axial flow) to validate our numerical code. In the axisymmetric case m = 0, the system response is largest when the input frequency is null, and the axial flow has a weak influence on the response for any axial velocity intensity. We also consider helical perturbations |m| = 1. These perturbations are excited through a resonance mechanism at moderate and large wavelengths, as shown in Figure 1. In addition, Figure 2 shows that the frequency at which the optimal gain is obtained is not a continuous function of the axial wavenumber k. At smaller wavelengths, a large response is excited by steady forcing. Regarding the axial flow, the unstable response is largest when the axial velocity intensity, 1/q, is close to zero. For perturbations with higher azimuthal wavenumbers |m| > 1, the magnitudes of the response are smaller than those for helical modes. As an alternative validation, DNS has been carried out using a pseudospectral Fourier formulation, finding very good agreement.
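
For reference, the q-vortex mentioned above is commonly written in nondimensional form as a Gaussian axial jet/wake on a Lamb-Oseen azimuthal profile; the particular normalisation below is an assumption, since conventions differ between studies:

```latex
% q-vortex (Batchelor vortex) profiles, nondimensional; this normalisation
% is assumed for illustration.
\[
  W(r) = \frac{1}{q}\, e^{-r^{2}}, \qquad
  V(r) = \frac{1}{r}\left(1 - e^{-r^{2}}\right)
\]
```

In this form both the axial velocity and the axial vorticity, ω_z(r) = 2e^{-r²}, are Gaussian, the axial velocity intensity is measured by 1/q, and the Lamb-Oseen vortex used for validation is recovered in the limit 1/q → 0.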

Relevance:

30.00%

Publisher:

Abstract:

The thesis is concerned with a number of problems in combinatorial set theory. The Generalized Continuum Hypothesis is assumed. Suppose X and K are non-zero cardinals. By successively identifying K with pairwise disjoint sets of power K, a function f: X → K can be viewed as a transversal of a pairwise disjoint (X, K)-family A. Questions about families of functions from X into K can thus be thought of as referring to families of transversals of A. We wish to consider generalizations of such questions to almost disjoint families; in particular, we are interested in extensions of the following two problems: (i) What is the 'maximum' cardinality of an almost disjoint family of functions each mapping X into K? (ii) Describe the cardinalities of maximal almost disjoint families of functions each mapping X into K. Bulletin of the Australian Mathematical Society 27(3):477-479, June 1983.
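
For context, one standard convention for "almost disjoint" in this setting is given below; the thesis may use a different smallness notion, so this is an assumption for illustration:

```latex
% One standard notion of almost disjointness for functions.
\[
  f, g \colon X \to K \ \text{are almost disjoint} \iff
  \bigl|\{\, x \in X : f(x) = g(x) \,\}\bigr| < |X|
\]
```

A family of such functions is then almost disjoint when its members are pairwise almost disjoint, and maximal when no strictly larger almost disjoint family contains it.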

Relevance:

30.00%

Publisher:

Abstract:

Modern electric machine drives, particularly three-phase permanent magnet machine drive systems, are an indispensable part of high-power-density products, including hybrid electric vehicles, large propulsion systems, and automation products. The reliability and cost of these products are directly related to the reliability and cost of the drive systems. Matching the electric machine and its drive system for optimal cost and operation has been a major challenge in industrial applications. The main objective of this dissertation is to find a design and control scheme offering the best compromise between the reliability and optimality of the electric machine-drive system. The effort presented here is motivated by the need for new techniques that connect the design and control of electric machines and drive systems. A highly accurate and computationally efficient modeling process was developed to monitor the magnetic, thermal, and electrical aspects of the electric machine in its operational environments. The modeling process was also utilized in the design process, in the form of a finite-element-based optimization process, including a hardware-in-the-loop variant. It was later employed in the design of very accurate and highly efficient physics-based customized observers, required for fault diagnosis as well as for sensorless rotor position estimation. Two test setups with different ratings and topologies were numerically and experimentally tested to verify the effectiveness of the proposed techniques. The modeling process was also employed in the real-time demagnetization control of the machine, and various real-time scenarios were successfully verified. It was shown that this process makes it possible to optimally redefine the assumptions used in sizing the permanent magnets of the machine and the DC bus voltage of the drive for the worst operating conditions. The mathematical development and stability criteria of the physics-based machine model, the design optimization, and the physics-based fault diagnosis and sensorless techniques are described in detail. To investigate the performance of the developed design test-bed, software and hardware setups were constructed first, and several permanent magnet machine topologies were optimized inside the optimization test-bed. To investigate the performance of the developed sensorless control, a test-bed including a 0.25 kW surface-mounted permanent magnet synchronous machine was created; verification of the proposed technique over a range from medium to very low speed effectively shows the intelligent design capability of the proposed system. Additionally, to investigate the performance of the developed fault diagnosis system, a test-bed including a 0.8 kW surface-mounted permanent magnet synchronous machine with trapezoidal back electromotive force was created. The results verify that the proposed technique works under dynamic eccentricity, DC bus voltage variations, and harmonic loading conditions, making the system an ideal candidate for propulsion systems.
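
By way of illustration only (the dissertation's physics-based observers are not reproduced here), a textbook voltage-model baseline for sensorless rotor position estimation in a surface-mounted PMSM can be sketched as follows; all symbols and parameter names are assumptions:

```python
import numpy as np

def rotor_angle_from_back_emf(v_ab, i_ab, di_ab_dt, R, L):
    """Hypothetical voltage-model position estimate for a surface-mounted PMSM
    in the stationary alpha-beta frame (a textbook baseline, not the
    dissertation's physics-based observer).

    Back EMF: e = v - R*i - L*di/dt, and for an SPMSM
    e_alpha = -w*lam*sin(theta), e_beta = w*lam*cos(theta),
    hence theta = atan2(-e_alpha, e_beta)."""
    e_alpha = v_ab[0] - R * i_ab[0] - L * di_ab_dt[0]
    e_beta  = v_ab[1] - R * i_ab[1] - L * di_ab_dt[1]
    return np.arctan2(-e_alpha, e_beta)
```

Because the back EMF vanishes as speed approaches zero, estimates of this kind degrade at very low speed, which is exactly the regime the abstract reports verifying.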

Relevance:

30.00%

Publisher:

Abstract:

We measured the distribution in absolute magnitude - circular velocity space for a well-defined sample of 199 rotating galaxies of the Calar Alto Legacy Integral Field Area Survey (CALIFA) using their stellar kinematics. Our aim in this analysis is to avoid subjective selection criteria and to take volume and large-scale structure factors into account. Using stellar velocity fields instead of gas emission line kinematics allows rapidly rotating early-type galaxies to be included. Our initial sample contains 277 galaxies with available stellar velocity fields and growth-curve r-band photometry. After rejecting 51 velocity fields that could not be modelled because of a low number of bins, foreground contamination, or significant interaction, we performed Markov chain Monte Carlo modelling of the velocity fields, from which we obtained the rotation curve and kinematic parameters together with realistic uncertainties. We applied an extinction correction and calculated the circular velocity v_circ accounting for the pressure support of a given galaxy. The resulting galaxy distribution on the M_r - v_circ plane was then modelled as a mixture of two distinct populations, allowing robust and reproducible rejection of outliers, a significant fraction of which are slow rotators. The selection effects are understood well enough that we were able to correct for the incompleteness of the sample. The 199 galaxies were weighted by volume and large-scale structure factors, which enabled us to fit a volume-corrected Tully-Fisher relation (TFR). More importantly, we also provide the volume-corrected distribution of galaxies in the M_r - v_circ plane, which can be compared with cosmological simulations. The joint distribution of the luminosity and circular velocity space densities, representative over the range -20 > M_r > -22 mag, can place more stringent constraints on galaxy formation and evolution scenarios than linear TFR fit parameters or the luminosity function alone.
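
A minimal sketch of the two-population modelling step (not the CALIFA pipeline; all data below are synthetic) might look like this:

```python
import numpy as np
from sklearn.mixture import GaussianMixture

# Synthetic stand-in for the M_r - v_circ plane: a TFR ridge plus a
# contaminant population (e.g. slow rotators).
rng = np.random.default_rng(0)
v_circ = 10 ** rng.normal(2.2, 0.15, 500)                      # km/s, synthetic
M_r = -1.0 - 7.5 * np.log10(v_circ) + rng.normal(0, 0.3, 500)  # synthetic ridge
M_r[:50] += rng.normal(2.0, 0.8, 50)                           # contaminants

X = np.column_stack([M_r, np.log10(v_circ)])
gmm = GaussianMixture(n_components=2, covariance_type="full",
                      random_state=0).fit(X)
labels = gmm.predict(X)
clean = labels == np.argmax(np.bincount(labels))  # keep the dominant population

# Linear TFR fit on the retained galaxies; in the paper each galaxy would also
# carry a volume and large-scale-structure weight.
slope, intercept = np.polyfit(np.log10(v_circ[clean]), M_r[clean], 1)
print(f"M_r = {slope:.2f} log10(v_circ) + {intercept:.2f}")
```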

Relevance:

30.00%

Publisher:

Abstract:

Alterations to the supply of oxygen during early life present a profound stressor to physiological systems, with aberrant remodeling that is often long-lasting. Chronic intermittent hypoxia (CIH) is a feature of apnea of prematurity, chronic lung disease, and sleep apnea. CIH affects respiratory control, but there is a dearth of information concerning the effects of CIH on respiratory muscles, including the diaphragm, the major pump muscle of breathing. We investigated the effects of exposure to gestational CIH (gCIH) and postnatal CIH (pCIH) on diaphragm muscle function in male and female rats. CIH consisted of exposure in environmental chambers to 90 s of hypoxia reaching 5% O2 at nadir, once every 5 min, 8 h a day. Exposure to gCIH started within 24 h of identification of a copulation plug and continued until day 20 of gestation; animals were studied on postnatal day 22 or 42. For pCIH, pups were born in normoxia and within 24 h of delivery were exposed with dams to CIH for 3 weeks; animals were studied on postnatal day 22 or 42. Sham groups were exposed to normoxia in parallel. Following gas exposures, diaphragm muscle contractile and endurance properties were examined ex vivo. Neither gCIH nor pCIH exposure had effects on diaphragm muscle force-generating capacity or endurance in either sex. Similarly, early life exposure to CIH did not affect muscle tolerance of severe hypoxic stress determined ex vivo. The findings contrast with our recent observation of upper airway dilator muscle weakness following exposure to pCIH. Thus, the present study suggests a relative resilience to hypoxic stress in diaphragm muscle. Co-ordinated activity of thoracic pump and upper airway dilator muscles is required for optimal control of upper airway caliber. A mismatch in the force-generating capacity of these complementary muscle groups could have adverse consequences for the control of airway patency and respiratory homeostasis.

Relevance:

30.00%

Publisher:

Abstract:

In this paper, a real-time optimal control technique for non-linear plants is proposed. The control system makes use of cell-mapping (CM) techniques, widely used for the global analysis of highly non-linear systems. The CM framework is employed for designing approximate optimal controllers via a control-variable discretization. Furthermore, CM-based designs can be improved by the use of supervised feedforward artificial neural networks (ANNs), which have proved to be universal and efficient tools for function approximation while also providing very fast responses. The quantitative nature of the approximate CM solutions fits very well with the characteristics of ANNs. Here, we propose several control architectures that combine supervised neural networks and CM control algorithms in different ways. On the one hand, different CM control laws computed for various target objectives can be employed for training a neural network, explicitly including the target information in the input vectors. This way, tracking problems, in addition to regulation ones, can be addressed in a fast and unified manner, obtaining smooth, averaged and global feedback control laws. On the other hand, adjoining CM and ANNs are also combined into a hybrid architecture to address problems where accuracy and real-time response are critical. Finally, some optimal control problems are solved with the proposed CM, neural and hybrid techniques, illustrating their good performance.
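
A minimal sketch of the first architecture described, training a network on CM control laws with the target included in the input vector, could look as follows; the stand-in CM law, network size and all names are assumptions, not the paper's implementation:

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

# Hypothetical CM output: a table mapping discretized states and commanded
# set-points to the optimal control found by cell mapping. A synthetic linear
# law stands in for the real CM table here.
rng = np.random.default_rng(0)
states = rng.uniform(-1, 1, size=(5000, 2))    # cell-centre states (x1, x2)
targets = rng.uniform(-1, 1, size=(5000, 1))   # commanded set-points
u_cm = -1.5 * (states[:, :1] - targets) - 0.8 * states[:, 1:]  # stand-in law

X = np.hstack([states, targets])  # target explicitly in the input vector
net = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=2000,
                   random_state=0).fit(X, u_cm.ravel())

# At run time the network interpolates smoothly between CM cells, giving one
# controller for both regulation and tracking:
u = net.predict(np.array([[0.3, -0.1, 0.0]]))  # state (0.3, -0.1), target 0
```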

Relevance:

30.00%

Publisher:

Abstract:

The current approach to data analysis for the Laser Interferometer Space Antenna (LISA) depends on the time delay interferometry (TDI) observables, which have to be generated before any weak-signal detection can be performed. These are linear combinations of the raw data with appropriate time shifts that lead to the cancellation of the laser frequency noises. This is possible because of the multiple occurrences of the same noises in the different raw data streams. Originally, these observables were generated manually, starting with LISA as a simple stationary array and then adjusting to incorporate the antenna's motion. However, none of the observables survived the flexing of the arms, in that they no longer led to cancellation with the same structure. The principal component approach, presented by Romano and Woan, is another way of handling these noises; it simplifies the data analysis by removing the need to create the observables before the analysis. This method also depends on the multiple occurrences of the same noises but, instead of using them for cancellation, it takes advantage of the correlations that they produce between the different readings. These correlations can be expressed in a noise (data) covariance matrix, which appears in the Bayesian likelihood function when the noises are assumed to be Gaussian. Romano and Woan showed that an eigendecomposition of this matrix produces two distinct sets of eigenvalues, distinguished by the absence of laser frequency noise from one set. Transforming the raw data using the corresponding eigenvectors also produces data free from the laser frequency noises. This result led to the idea that the principal components may actually be time delay interferometry observables, since they produce the same outcome: data that are free from laser frequency noise. The aims here were (i) to investigate the connection between the principal components and these observables, (ii) to prove that data analysis using them is equivalent to that using the traditional observables, and (iii) to determine how this method adapts to the real LISA, especially the flexing of the antenna. To test the connection between the principal components and the TDI observables, a 10 × 10 covariance matrix containing integer values was used in order to obtain an algebraic solution for the eigendecomposition. The matrix was generated using fixed unequal arm lengths and stationary noises with equal variances for each noise type. The results confirm that all four Sagnac observables can be generated from the eigenvectors of the principal components. The observables obtained from this method, however, are tied to the length of the data and are not general expressions like the traditional observables; for example, the Sagnac observables for two different time stamps were generated from different sets of eigenvectors. It was also possible to generate the frequency-domain optimal AET observables from the principal components obtained from the power spectral density matrix. These results indicate that this method is another way of producing the observables, so analysis using principal components should give the same results as analysis using the traditional observables. This was proven by the fact that the same relative likelihoods (within 0.3%) were obtained from the Bayesian estimates of the signal amplitude of a simple sinusoidal gravitational wave using the principal components and the optimal AET observables.
This method fails if the eigenvalues that are free from laser frequency noises are not generated. These are obtained from the covariance matrix, and the properties of LISA required for its computation are the phase-locking, arm lengths and noise variances. Preliminary results on the effects of these properties on the principal components indicate that only the absence of phase-locking prevented their production. The flexing of the antenna results in time-varying arm lengths, which appear in the covariance matrix and, in our toy-model investigations, did not prevent the occurrence of the principal components. The difficulty with flexing, and also with non-stationary noises, is that the Toeplitz structure of the matrix is destroyed, which affects any computation methods that take advantage of this structure. In terms of separating the two sets of data for the analysis, this was not necessary because the laser frequency noises are very large compared to the photodetector noises, which resulted in a significant reduction of the data containing them after the matrix inversion. In the frequency domain, the power spectral density matrices were block diagonal, which simplified the computation of the eigenvalues by allowing it to be carried out separately for each block. The results in general showed a lack of principal components in the absence of phase-locking, except for the zero bin. The major difference with the power spectral density matrix is that the time-varying arm lengths and non-stationarity do not show up, because of the summation in the Fourier transform.
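
A toy numpy demonstration of the eigendecomposition idea (not the LISA noise model; dimensions, variances and the threshold are invented for illustration) is sketched below:

```python
import numpy as np

# Toy stand-in: laser frequency noise is common to several raw readings, so
# the data covariance has a few very large eigenvalues carrying laser noise
# and a complementary, laser-noise-free set.
rng = np.random.default_rng(0)
n_samples, n_channels = 4096, 6
laser = rng.normal(0, 100.0, size=(n_samples, 3))  # large common noises
mix = rng.normal(size=(3, n_channels))             # each appears in several channels
data = laser @ mix + rng.normal(0, 1.0, size=(n_samples, n_channels))

C = np.cov(data, rowvar=False)
eigvals, eigvecs = np.linalg.eigh(C)               # ascending eigenvalues

# The small-eigenvalue subspace is (approximately) free of laser noise;
# projecting the raw data onto it plays the role of forming TDI combinations.
quiet = eigvecs[:, eigvals < 10.0]                 # threshold splits the two sets
tdi_like = data @ quiet
print(eigvals)                                     # two clearly separated groups
```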

Relevance:

30.00%

Publisher:

Abstract:

The vapor pressure of four liquid 1H,1H-perfluoroalcohols (CF3(CF2)n(CH2)OH, n = 1, 2, 3, 4), often called odd-fluorotelomer alcohols, was measured as a function of temperature between 278 K and 328 K. Liquid densities were also measured over a temperature range between 278 K and 353 K. Molar enthalpies of vaporization were calculated from the experimental data. The results are compared with literature data for other perfluoroalcohols as well as for the equivalent hydrogenated alcohols. The results were modeled and interpreted using molecular dynamics simulations and the GC-SAFT-VR equation of state.
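
One standard route from such vapor-pressure data to molar enthalpies of vaporization is a Clausius-Clapeyron fit; whether the authors used this or a different correlation is not stated, and the data values below are placeholders, not the measured ones:

```python
import numpy as np

R = 8.314462618  # gas constant, J mol^-1 K^-1

T = np.array([278.0, 288.0, 298.0, 308.0, 318.0, 328.0])    # K (placeholder)
p = np.array([120.0, 260.0, 520.0, 990.0, 1790.0, 3100.0])  # Pa (placeholder)

# Clausius-Clapeyron with Delta_vap H assumed constant over the fitted range:
# ln p = -Delta_vap H / (R T) + const, so the slope of ln p vs 1/T gives it.
slope, _ = np.polyfit(1.0 / T, np.log(p), 1)
dH_vap = -R * slope
print(f"Delta_vap H ~ {dH_vap / 1000:.1f} kJ/mol")
```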

Relevance:

30.00%

Publisher:

Abstract:

Despite the wide applicability of the field capacity (FC) concept in hydrology and engineering, it presents various ambiguities and inconsistencies due to a lack of methodological standardization. Experimental field and laboratory protocols taken from the literature were used in this study to determine the value of FC at different depths in 29 soil profiles, totaling 209 soil samples. The volumetric water content (θ) was also determined at three suctions (6 kPa, 10 kPa, 33 kPa), along with bulk density (BD), texture (T) and organic matter content (OM). The protocols were devised based on the water processes involved in the FC concept, aiming to minimize hydraulic inconsistencies and procedural difficulty while maintaining the practical meaning of the concept. A high correlation between FC and θ(6 kPa) allowed the development of a pedotransfer function (Equation 3), quadratic in θ(6 kPa), resulting in an accurate and nearly bias-free calculation of FC for the four database geographic areas, with a global root mean squared residue (RMSR) of 0.026 m³·m⁻³. At the individual soil profile scale, the maximum RMSR was only 0.040 m³·m⁻³. The BD, T and OM data were generally of low predictive quality for FC when not accompanied by the moisture variables. As all the FC values were obtained with the same experimental protocol, and as the predictive quality of Equation 3 was clearly better than that of the classical method, which takes FC equal to θ(6 kPa), θ(10 kPa) or θ(33 kPa), we recommend using Equation 3 rather than the classical method, together with the protocol presented here, to determine in-situ FC.
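
Equation 3 itself is not reproduced in the abstract, so the sketch below only illustrates how a pedotransfer function quadratic in θ(6 kPa) would be fitted and scored by RMSR; the coefficients and data are placeholders, not the paper's values:

```python
import numpy as np

# Placeholder data standing in for the 209 samples: theta(6 kPa) and the
# protocol-measured FC, both in m3/m3.
rng = np.random.default_rng(0)
theta6 = rng.uniform(0.15, 0.45, 209)
fc_obs = 0.05 + 0.9 * theta6 - 0.3 * theta6**2 \
         + rng.normal(0, 0.02, theta6.size)   # synthetic "measurements"

# Quadratic pedotransfer function FC ~ c0 + c1*theta6 + c2*theta6^2.
c2, c1, c0 = np.polyfit(theta6, fc_obs, 2)
fc_pred = np.polyval([c2, c1, c0], theta6)

rmsr = np.sqrt(np.mean((fc_pred - fc_obs) ** 2))  # root mean squared residue
print(f"RMSR = {rmsr:.3f} m3/m3")                 # the paper reports 0.026 globally
```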

Relevance:

30.00%

Publisher:

Abstract:

A substantial fraction of the Universe's volume is dominated by almost empty space. Alongside the luminous filamentary structures, there are vast and smooth regions that have remained outside the cosmology spotlight during the past decades: cosmic voids. Although essentially devoid of matter, voids encode fundamental information about the cosmological framework and have gradually become an effective and competitive cosmological probe. In this thesis work we present fundamental results on the cosmological exploitation of voids. We focused on the number density of voids as a function of their radius, known as the void size function, developing an effective pipeline for its cosmological usage. We proposed a new parametrisation of the most widely used theoretical void size function to model voids identified in the distribution of biased tracers (i.e. dark matter haloes, galaxies and galaxy clusters), a step of fundamental importance for extending the analysis to real survey data. We then applied this methodology to study voids in alternative cosmological scenarios. First, we exploited voids with the aim of breaking the degeneracies between cosmological scenarios characterised by modified gravity and the inclusion of massive neutrinos. Second, we analysed voids in the perspective of the Euclid survey, focusing on the constraining power of the void abundance on dynamical dark energy models with massive neutrinos. Moreover, we explored other void statistics such as void profiles and clustering (i.e. the void-galaxy and the void-void correlation), providing cosmological forecasts for the Euclid mission. We finally focused on probe combination, highlighting the remarkable potential of jointly analysing multiple void statistics and of combining the void size function with different cosmological probes. Our results show the fundamental role of void analysis in constraining the parameters of the cosmological model and pave the way for future studies on this topic.

Relevance:

30.00%

Publisher:

Abstract:

Finding the optimal location for a dam on a river is usually a complicated process, and dam construction generally forces thousands of people to flee their homes because their land is inundated during the filling of the reservoir. Dams can also attract people to the surrounding area after their construction. The goal of this research is to assess the attractiveness of dams by comparing growth rates of population density in surrounding areas after dam construction with those of the period preceding construction. To this aim, 1859 dams across the United States of America and high-resolution population distributions from 1790 to 2010 are examined. Grouping dams by their main purpose, water supply dams are found to be, as expected, the most attractive to people, with the largest growth in population density. Irrigation dams are next, followed by hydroelectricity, flood control, navigation, and finally recreation dams. Fishery dams and dams for other uses experienced a decrease in population in the years after their construction. The regions with the greatest population growth were found approximately 40-45 km from the dam and at distances greater than 90 km, whereas the regions with the greatest population decline, or only a modest gain, were located within 10-15 km of the dam.
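
A sketch of the pre- versus post-construction growth-rate comparison described; the dataset layout, column names and values are hypothetical, not the study's data:

```python
import numpy as np
import pandas as pd

# Hypothetical layout: one row per distance ring and census year around a dam
# built in 1930; densities are random placeholders.
rng = np.random.default_rng(0)
years = np.arange(1790, 2011, 10)
df = pd.DataFrame({
    "ring_km": np.repeat([10, 45, 90], years.size),
    "year": np.tile(years, 3),
    "density": rng.uniform(5, 50, 3 * years.size),
    "build_year": 1930,
})

def annual_growth(g):
    """Compound annual growth rate of density over a group's year span."""
    g = g.sort_values("year")
    span = g["year"].iloc[-1] - g["year"].iloc[0]
    return (g["density"].iloc[-1] / g["density"].iloc[0]) ** (1 / span) - 1

pre = df[df["year"] <= df["build_year"]].groupby("ring_km").apply(annual_growth)
post = df[df["year"] > df["build_year"]].groupby("ring_km").apply(annual_growth)
print((post - pre).rename("post_minus_pre_growth"))  # attractiveness by ring
```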

Relevance:

30.00%

Publisher:

Abstract:

Alpha oscillatory activity has long been associated with perceptual and cognitive processes related to attention control. The aim of this study is to explore the task-dependent role of alpha frequency in a lateralized visuo-spatial detection task. Specifically, the thesis consolidates the literature on the role of alpha frequency in perceptual accuracy, and deepens the understanding of what determines trial-by-trial fluctuations of alpha parameters and how these fluctuations influence overall task performance. The hypotheses, confirmed empirically, were that different implicit strategies are adopted depending on the task context in order to maximize performance through optimal distribution of resources (namely alpha frequency, which is positively associated with performance): "lateralization" of attentive resources towards one hemifield should be associated with a larger alpha frequency difference between the contralateral and ipsilateral hemispheres, whereas "distribution" of attentive resources across hemifields should be associated with a smaller inter-hemispheric alpha frequency difference. These strategies, used by participants according to their brain capabilities, proved adaptive or maladaptive depending on the task: "distribution" of attentive resources appeared to be the best strategy when the probability of target appearance was balanced between hemifields (the neutral condition task), while "lateralization" appeared more effective when that probability was biased towards one hemifield (the biased condition task).
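
As a hedged sketch of the trial-level measure implied here, the per-hemisphere alpha peak frequency and its contralateral-minus-ipsilateral difference could be computed as below; the channel grouping, sampling rate and data are assumptions, not the thesis pipeline:

```python
import numpy as np
from scipy.signal import welch

def alpha_peak_frequency(eeg, fs, band=(8.0, 13.0)):
    """Peak frequency in the alpha band from Welch's PSD, averaged over channels."""
    f, pxx = welch(eeg, fs=fs, nperseg=2 * int(fs))
    mask = (f >= band[0]) & (f <= band[1])
    return f[mask][np.argmax(pxx.mean(axis=0)[mask])]

# Stand-in data: channels x samples for the hemispheres contralateral and
# ipsilateral to the attended hemifield (random noise here, real EEG in use).
fs = 500.0
rng = np.random.default_rng(0)
contra = rng.normal(size=(8, 5 * int(fs)))
ipsi = rng.normal(size=(8, 5 * int(fs)))

# Positive values would indicate "lateralization" towards the attended side.
lateralization_index = (alpha_peak_frequency(contra, fs)
                        - alpha_peak_frequency(ipsi, fs))
print(f"contra-minus-ipsi alpha frequency: {lateralization_index:.2f} Hz")
```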