993 results for net radiation estimation


Relevance: 30.00%

Abstract:

During the Cenozoic, Australia experienced major climatic shifts that have had dramatic ecological consequences for the modern biota. Mesic tropical ecosystems were progressively restricted to the coasts and replaced by arid-adapted floral and faunal communities. Whilst the role of aridification has been investigated in a wide range of terrestrial lineages, the response of freshwater clades remains poorly understood. To gain insights into the diversification processes underlying a freshwater radiation, we studied the evolutionary history of the Australasian predaceous diving beetles of the tribe Hydroporini (147 described species). We used an integrative approach including the latest methods in phylogenetics, divergence time estimation, ancestral character state reconstruction, and likelihood-based diversification rate estimation. Phylogenies and dating analyses were reconstructed with molecular data from seven genes (mitochondrial and nuclear) for 117 species (plus 12 outgroups). Robust and well-resolved phylogenies indicate a late Oligocene origin of Australasian Hydroporini. Biogeographic analyses suggest an origin in the East Coast region of Australia and a dynamic biogeographic scenario implying dispersal events. The group successfully colonized the tropical coastal regions carved out by rampant desertification, and also colonized groundwater ecosystems in Central Australia. Diversification rate analyses suggest that the ongoing aridification of Australia, initiated in the Miocene, contributed to a major wave of extinctions since the late Pliocene, probably attributable to increasing aridity, range contractions, and seasonal disruptions resulting from Quaternary climatic changes. When comparing subterranean and epigean genera, our results show that contrasting mechanisms drove their diversification and hence their current diversity patterns.
The Australasian Hydroporini radiation reflects a combination of processes that promoted both diversification, resulting from new ecological opportunities driven by initial aridification, and a subsequent loss of mesic-adapted diversity due to increasing aridity.

Relevance: 30.00%

Abstract:

The use of factor-augmented panel regressions has become very popular in recent years. Existing methods for such regressions require that the common factors are strong, such that their cumulative loadings rise proportionally to the number of cross-sectional units, which need not be the case in practice. Motivated by this, the current paper offers an in-depth analysis of the effect of non-strong factors on two of the most popular estimators for factor-augmented regressions, namely principal components (PC) and common correlated effects (CCE).
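To fix ideas, the CCE approach can be sketched in a small simulation for the textbook strong-factor case, where cross-section averages proxy for the unobserved factor (all numbers below are invented for illustration; this is the standard CCE mean-group estimator, not the paper's analysis of the non-strong case):

```python
import numpy as np

rng = np.random.default_rng(0)
N, T = 50, 200                         # cross-sectional units and time periods
f = rng.standard_normal(T)             # one unobserved common factor
lam = 1.0 + rng.standard_normal(N)     # loadings with nonzero mean (a "strong" factor)
beta = 1.5
x = rng.standard_normal((N, T)) + 0.5 * f          # regressor also loads on the factor
y = beta * x + np.outer(lam, f) + 0.1 * rng.standard_normal((N, T))

# CCE: augment each unit's regression with cross-section averages of y and x,
# which span the factor space when the loadings are strong
ybar, xbar = y.mean(axis=0), x.mean(axis=0)
betas = []
for i in range(N):
    X = np.column_stack([x[i], ybar, xbar, np.ones(T)])
    coef, *_ = np.linalg.lstsq(X, y[i], rcond=None)
    betas.append(coef[0])
beta_cce = float(np.mean(betas))       # mean-group estimate, close to the true 1.5
```

When the factor is non-strong (loadings shrinking with N), the averages no longer span the factor space, which is the failure mode the paper investigates.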

Relevance: 30.00%

Abstract:

In this paper we propose a simple procedure for the data-dependent determination of the number of lags and leads to use in feasible estimation of cointegrated panel regressions. Results from Monte Carlo simulations suggest that the feasible estimators considered enjoy excellent precision in terms of root mean squared error and reasonable power, with empirical size close to the nominal level. The good performance of the feasible estimators is verified empirically through an application to long-run money demand.
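The lag/lead idea can be sketched for a single cointegrated series: augment the levels regression with lags and leads of the differenced regressor (dynamic OLS) and pick their number by an information criterion. A minimal sketch with invented data and a BIC rule (one common data-dependent choice, not necessarily the paper's procedure):

```python
import numpy as np

rng = np.random.default_rng(1)
T = 300
x = np.cumsum(rng.standard_normal(T))      # a single I(1) regressor
y = 2.0 * x + rng.standard_normal(T)       # cointegrated, true beta = 2

def dols(y, x, k):
    """Dynamic OLS with k lags and k leads of dx; returns (beta_hat, BIC)."""
    dx = np.diff(x)
    t0, t1 = k + 1, len(x) - k             # trim so all lags/leads exist
    X = np.array([np.concatenate(([1.0, x[t]], dx[t - 1 - k: t + k]))
                  for t in range(t0, t1)])
    yy = y[t0:t1]
    coef, *_ = np.linalg.lstsq(X, yy, rcond=None)
    resid = yy - X @ coef
    n = len(yy)
    bic = n * np.log(resid @ resid / n) + X.shape[1] * np.log(n)
    return coef[1], bic

# data-dependent choice: the k that minimises BIC
best_k = min(range(5), key=lambda k: dols(y, x, k)[1])
beta_hat = dols(y, x, best_k)[0]           # close to the true value 2
```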

Relevance: 30.00%

Abstract:

This paper addresses the problem of fully automatic localization and segmentation of 3D intervertebral discs (IVDs) from MR images. Our method consists of two steps: we first localize the center of each IVD, and then segment the IVDs by classifying image pixels around each disc center as foreground (disc) or background. Localization is done by estimating the image displacements from a set of randomly sampled 3D image patches to the disc center. The displacements are estimated by jointly optimizing the training and test displacement values in a data-driven way, taking into account both the training data and the geometric constraint on the test image. Once the disc centers are localized, segmentation follows a similar data-driven approach, but here we estimate the foreground/background probability of each pixel instead of the image displacements. In addition, a neighborhood smoothness constraint is introduced to enforce local smoothness of the label field. Our method is validated on 3D T2-weighted turbo spin echo MR images of 35 patients from two different studies. Experiments show that, compared to the state of the art, our method achieves better or comparable results. Specifically, we achieve a mean localization error of 1.6-2.0 mm, and for segmentation a mean Dice metric of 85%-88% and a mean surface distance of 1.3-1.4 mm.
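The displacement-based localization step has a simple voting interpretation: each patch predicts a displacement to the center, so each patch casts a vote for a center location, and a robust aggregate of the votes gives the estimate. A toy sketch with invented coordinates and noise (the paper's joint data-driven optimization of displacements is not modelled here):

```python
import numpy as np

rng = np.random.default_rng(2)
true_center = np.array([60.0, 80.0, 40.0])   # hypothetical disc centre (voxel coords)

# random 3D patch positions, each with a noisy predicted displacement to the centre
patches = rng.uniform(0, 120, size=(500, 3))
pred_disp = (true_center - patches) + rng.normal(0, 3.0, size=(500, 3))

# every patch casts a vote; the per-axis median is robust to outlying votes
votes = patches + pred_disp
center_hat = np.median(votes, axis=0)        # close to true_center
```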

Relevance: 30.00%

Abstract:

BACKGROUND: Local destinations have previously been shown to be associated with higher levels of both physical activity and walking, but little is known about how the distribution of destinations is related to activity. Kernel density estimation is a spatial analysis technique that accounts for the location of features relative to each other. Using kernel density estimation, this study investigated whether individuals who live near destinations (shops and service facilities) that are intensely distributed rather than dispersed: 1) have higher odds of being sufficiently active; 2) engage in more frequent walking for transport and recreation. METHODS: The sample consisted of 2349 residents of 50 urban areas in metropolitan Melbourne, Australia. Destinations within these areas were geocoded, and kernel density estimates of destination intensity were created using kernels of 400 m (meters), 800 m and 1200 m. Using multilevel logistic regression, the association between destination intensity (classified in quintiles, Q1 (least) to Q5 (most)) and the likelihood of 1) being sufficiently active (compared to insufficiently active) and 2) walking ≥4 times per week (compared to walking less) was estimated in models adjusted for potential confounders. RESULTS: For all kernel distances, there was a significantly greater likelihood of walking ≥4 times per week among respondents living in areas of greatest destination intensity compared to areas with the least: 400 m (Q4 OR 1.41, 95%CI 1.02-1.96; Q5 OR 1.49, 95%CI 1.06-2.09), 800 m (Q4 OR 1.55, 95%CI 1.09-2.21; Q5 OR 1.71, 95%CI 1.18-2.48) and 1200 m (Q4 OR 1.70, 95%CI 1.18-2.45; Q5 OR 1.86, 95%CI 1.28-2.71). There was also evidence of associations between destination intensity and sufficient physical activity; however, these associations were markedly attenuated when walking was included in the models.
CONCLUSIONS: This study, conducted within urban Melbourne, found that those who lived in areas of greater destination intensity walked more frequently and had higher odds of being sufficiently physically active, an effect largely explained by levels of walking. The results suggest that increasing the intensity of destinations in areas where they are dispersed, and/or planning neighborhoods with greater destination intensity, may increase residents' likelihood of being sufficiently active for health.
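The core idea of kernel density estimation of destination intensity — nearby, clustered destinations contribute more than the same number spread thinly — can be sketched with invented coordinates (this is a generic quartic-kernel estimator, not the study's GIS implementation):

```python
import numpy as np

def kernel_density(point, destinations, bandwidth=400.0):
    """Quartic-kernel intensity of destinations around `point` (coords in metres)."""
    d = np.linalg.norm(destinations - point, axis=1)
    u = d / bandwidth
    w = np.where(u < 1, (15 / 16) * (1 - u**2) ** 2, 0.0)  # quartic (biweight) kernel
    return float(w.sum() / bandwidth**2)                   # intensity per square metre

rng = np.random.default_rng(3)
home = np.array([0.0, 0.0])
clustered = rng.normal(0, 150, size=(30, 2))        # 30 destinations packed near home
dispersed = rng.uniform(-2000, 2000, size=(30, 2))  # same count, spread thinly

dense = kernel_density(home, clustered)    # high: many destinations inside 400 m
sparse = kernel_density(home, dispersed)   # low: few destinations inside 400 m
```

Repeating the calculation with bandwidths of 800 and 1200 mirrors the study's three kernel distances.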

Relevance: 30.00%

Abstract:

Muscle size in the lower limb is commonly assessed in neuromuscular research as it correlates with muscle function, and several approaches have been assessed for their ability to provide valid estimates of muscle volume. Work to date has not examined the ability of different measurement approaches (such as cross-sectional area (CSA) measures on magnetic resonance (MR) imaging) to accurately track changes in muscle volume resulting from an intervention, such as exercise, injury or disuse. Here we assess whether (a) the percentage change in muscle CSA in 17 lower-limb muscles during 56 days of bed rest, as assessed by five different algorithms, lies within 0.5% of the muscle volume change and (b) the variability of the outcome measure is comparable to that of muscle volume. We find that an approach that selects the MR image with the highest muscle CSA and then takes a series of CSA measures immediately distal and proximal to it, the number of which depends upon the muscle considered, provides an acceptable estimate of the muscle volume change. In the vastii, peroneal, sartorius and anterior tibial muscle groups, accurate results can be attained by increasing the spacing between CSA measures, thus reducing the total number of MR images and hence the measurement time. In the two heads of biceps femoris, semimembranosus and gracilis, it is not possible to reduce the number of CSA measures and the entire muscle volume must be evaluated. Using these approaches, one can reduce the number of CSA measures required to estimate changes in muscle volume by ~60%. These findings help to provide more efficient means of tracking muscle volume changes in interventional studies.
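The slice-sparing idea — anchor at the peak-CSA slice and widen the spacing of CSA measures — can be mimicked on a synthetic CSA profile (the profile shape, slice gap and numbers are invented; the paper's algorithms operate on real MR series):

```python
import numpy as np

slice_gap = 5.0                               # mm between MR slices (assumed)
z = np.arange(0, 300, slice_gap)              # slice positions along the limb (mm)
csa = 1500 * np.exp(-((z - 150) / 70) ** 2)   # hypothetical CSA profile (mm^2)

full_volume = csa.sum() * slice_gap           # reference: integrate every slice

# sparse protocol: start at the peak-CSA slice and take every second slice
# outward in both directions, doubling the effective inter-measure spacing
peak = int(np.argmax(csa))
idx = sorted(set(range(peak, -1, -2)) | set(range(peak, len(z), 2)))
sparse_volume = csa[idx].sum() * slice_gap * 2

error_pct = 100 * abs(sparse_volume - full_volume) / full_volume  # well under 1%
```

Here roughly half the slices suffice for a smooth profile; for muscles with irregular CSA profiles the denser sampling remains necessary, echoing the biceps femoris result above.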

Relevance: 30.00%

Abstract:

This paper presents new developments in common functional observers for two systems. We improve an existing common functional observer scheme by reducing its order, and then investigate its existence conditions in terms of the original system matrices. These conditions have not been explored before, and they enable users to know at the outset the class of systems for which the scheme is applicable. They also show that both observers can be designed independently of each other, which significantly simplifies the design process. A numerical simulation verifies the findings.

Relevance: 30.00%

Abstract:

In this paper, we investigate the channel estimation problem for multiple-input multiple-output (MIMO) relay communication systems with time-varying channels. The time-varying characteristic of the channels is described by the complex-exponential basis expansion model (CE-BEM). We propose a superimposed channel training algorithm to estimate the individual first-hop and second-hop time-varying channel matrices for MIMO relay systems. In particular, the estimation of the second-hop time-varying channel matrix is performed by exploiting the superimposed training sequence at the relay node, while the first-hop time-varying channel matrix is estimated through the source node training sequence and the estimated second-hop channel. To improve the performance of channel estimation, we derive the optimal structure of the source and relay training sequences that minimize the mean-squared error (MSE) of channel estimation. We also optimize the relay amplification factor that governs the power allocation between the source and relay training sequences. Numerical simulations demonstrate that the proposed superimposed channel training algorithm for MIMO relay systems with time-varying channels outperforms the conventional two-stage channel estimation scheme.
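The flavour of training-based MIMO channel estimation can be seen in a static least-squares sketch; the orthogonal training rows echo the optimal sequence structure, but the CE-BEM time variation, the relay hop and the superimposed-training aspect of the paper are deliberately not modelled (antenna counts, training length and SNR below are assumed):

```python
import numpy as np

rng = np.random.default_rng(4)
Nt, Nr, L = 2, 2, 64     # tx/rx antennas and training length (assumed values)
H = (rng.standard_normal((Nr, Nt)) + 1j * rng.standard_normal((Nr, Nt))) / np.sqrt(2)

# orthogonal training: scaled DFT rows, so S @ S^H = I (the optimal sequences
# derived in the paper are likewise orthogonal)
S = np.fft.fft(np.eye(L))[:Nt] / np.sqrt(L)

snr = 100.0
noise = (rng.standard_normal((Nr, L)) + 1j * rng.standard_normal((Nr, L))) / np.sqrt(2 * snr)
R = H @ S + noise        # received block during training

H_hat = R @ S.conj().T   # least-squares estimate; error variance ~ 1/snr per entry
mse = float(np.mean(np.abs(H_hat - H) ** 2))
```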

Relevance: 30.00%

Abstract:

In this paper, we investigate the channel estimation problem for two-way multiple-input multiple-output (MIMO) relay communication systems in frequency-selective fading environments. We apply the method of superimposed channel training to estimate the individual channel state information (CSI) of the first-hop and second-hop links for two-way MIMO relay systems with frequency-selective fading channels. In this algorithm, a relay training sequence is superimposed on the received signals at the relay node to assist the estimation of the second-hop channel matrices. The optimal structure of the source and relay training sequences is derived to minimize the mean-squared error (MSE) of channel estimation. Moreover, the optimal power allocation between the source and relay training sequences is derived to improve the performance of channel estimation. Numerical examples are shown to demonstrate the performance of the proposed superimposed channel training algorithm for two-way MIMO relay systems in frequency-selective fading environments.

Relevance: 30.00%

Abstract:

There are three different approaches to functional observer design for Linear Time-Invariant (LTI) systems in the literature. One of the most common methods was proposed by Aldeen [1] and further developed by others. We found several examples in which the necessary and sufficient conditions for the existence of a functional observer are actually not sufficient for this methodology. This finding motivated us to develop a new methodology for designing functional observers. Our new method provides enough degrees of freedom in the observer design parameters, and it addresses a weakness in Aldeen's method in solving the coupled observer matrix equations. In this paper, we present the reasoning and an example showing the insufficiency of the former method, and then present our newly developed methodology. An illustrative algorithm describes the design procedure step by step. A numerical example and simulation results support our findings and demonstrate the performance of the proposed method.

Relevance: 30.00%

Abstract:

Finding practical ways to robustly estimate abundance or density trends in threatened species is a key facet of effective conservation management. Identifying less expensive monitoring methods that provide adequate data for robust population density estimates can also free up investment for other conservation initiatives needed for species recovery. Here we evaluated and compared inference and cost-effectiveness criteria for three field monitoring and density estimation protocols to improve conservation activities for the threatened Komodo dragon (Varanus komodoensis). We undertook line-transect counts, cage trapping and camera monitoring surveys for Komodo dragons at 11 sites within protected areas in Eastern Indonesia, and estimated density using distance sampling methods or the Royle-Nichols abundance-induced heterogeneity model. Distance sampling estimates were considered poor due to large confidence intervals, a high coefficient of variation, and false absences in 45% of sites where other monitoring methods detected lizards. The Royle-Nichols model using presence/absence data obtained from cage trapping and camera monitoring produced highly correlated density estimates, achieved similar precision, and recorded no false absences. Because the costs associated with camera monitoring were considerably less than those of cage trapping, albeit marginally more than distance sampling, this method is advocated for ongoing population monitoring of Komodo dragons. Furthermore, the cost savings achieved by adopting this field monitoring method could facilitate increased expenditure on alternative management strategies that could help address current declines in two Komodo dragon populations.
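The Royle-Nichols model turns repeated detection/non-detection surveys into abundance estimates by marginalising over the latent abundance at each site. A small simulated sketch of the likelihood (parameter values are invented, and far more sites than the study's 11 are used so the maximum-likelihood fit is stable):

```python
import numpy as np
from scipy.stats import poisson, binom
from scipy.optimize import minimize

rng = np.random.default_rng(5)
n_sites, n_visits = 200, 20
lam_true, r_true = 2.0, 0.2           # mean abundance; per-animal detection prob

N = rng.poisson(lam_true, n_sites)    # latent abundance at each site
p_site = 1 - (1 - r_true) ** N        # per-visit detection probability
y = rng.binomial(n_visits, p_site)    # detections out of n_visits per site

def nll(theta, kmax=40):
    """Negative log-likelihood, marginalising latent abundance k = 0..kmax."""
    lam, r = np.exp(theta)            # optimise on the log scale to keep params > 0
    k = np.arange(kmax + 1)
    prior = poisson.pmf(k, lam)
    pdet = 1 - (1 - r) ** k
    lik = (binom.pmf(y[:, None], n_visits, pdet[None, :]) * prior[None, :]).sum(axis=1)
    return -np.log(lik + 1e-300).sum()

fit = minimize(nll, x0=np.log([1.0, 0.1]), method="Nelder-Mead")
lam_hat, r_hat = np.exp(fit.x)        # should land near the true (2.0, 0.2)
```

Detection-heterogeneity variance across sites is what lets presence/absence data identify abundance, which is why the cheaper camera data sufficed in the study.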

Relevance: 30.00%

Abstract:

The need to estimate a particular quantile of a distribution is an important problem that frequently arises in many computer vision and signal processing applications. For example, our work was motivated by the requirements of many semi-automatic surveillance analytics systems which detect abnormalities in closed-circuit television (CCTV) footage using statistical models of low-level motion features. In this paper we specifically address the problem of estimating the running quantile of a data stream with non-stationary stochasticity when the memory for storing observations is limited. We make several major contributions: (i) we derive an important theoretical result which shows that the change in the quantile of a stream is constrained regardless of the stochastic properties of the data, (ii) we describe a set of high-level design goals for an effective estimation algorithm that emerge as a consequence of our theoretical findings, (iii) we introduce a novel algorithm which implements these design goals by retaining a sample of data values in a manner adaptive to changes in the distribution of the data, progressively narrowing its focus during periods of quasi-stationary stochasticity, and (iv) we present a comprehensive evaluation of the proposed algorithm, comparing it with the existing methods in the literature on both synthetic data sets and three large 'real-world' streams acquired in the course of operation of an existing commercial surveillance system. Our findings convincingly demonstrate that the proposed method is highly successful and vastly outperforms the existing alternatives, especially when the target quantile is high-valued and the available buffer capacity is severely limited.
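As a point of reference for the memory-constrained setting, here is a generic fixed-buffer baseline using reservoir sampling; it keeps a uniform sample of the whole stream, so unlike the adaptive method described above it does not track non-stationary changes (capacity and data are invented for illustration):

```python
import random

class ReservoirQuantile:
    """Running quantile from a fixed-size uniform sample (reservoir sampling)."""

    def __init__(self, capacity=500, seed=0):
        self.capacity = capacity
        self.buffer = []
        self.n = 0                        # stream length seen so far
        self.rng = random.Random(seed)

    def update(self, x):
        self.n += 1
        if len(self.buffer) < self.capacity:
            self.buffer.append(x)
        else:
            j = self.rng.randrange(self.n)  # keep x with prob capacity/n
            if j < self.capacity:
                self.buffer[j] = x

    def quantile(self, q):
        s = sorted(self.buffer)
        return s[min(int(q * len(s)), len(s) - 1)]

rng = random.Random(1)
est = ReservoirQuantile(capacity=500)
for _ in range(100_000):
    est.update(rng.gauss(0, 1))
# the true 0.95 quantile of N(0, 1) is about 1.645
q95 = est.quantile(0.95)
```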

Relevance: 30.00%

Abstract:

The need to estimate a particular quantile of a distribution is an important problem that frequently arises in many computer vision and signal processing applications. For example, our work was motivated by the requirements of many semi-automatic surveillance analytics systems that detect abnormalities in closed-circuit television footage using statistical models of low-level motion features. In this paper, we specifically address the problem of estimating the running quantile of a data stream when the memory for storing observations is limited. We make several major contributions: 1) we highlight the limitations of approaches previously described in the literature that make them unsuitable for non-stationary streams; 2) we describe a novel principle for the utilization of the available storage space; 3) we introduce two novel algorithms that exploit the proposed principle in different ways; and 4) we present a comprehensive evaluation and analysis of the proposed algorithms and the existing methods in the literature on both synthetic data sets and three large real-world streams acquired in the course of operation of an existing commercial surveillance system. Our findings convincingly demonstrate that both of the proposed methods are highly successful and vastly outperform the existing alternatives. We show that the better of the two algorithms (the data-aligned histogram) exhibits far superior performance in comparison with the previously described methods, achieving more than 10 times lower estimation errors on real-world data, even when its available working memory is an order of magnitude smaller.

Relevance: 30.00%

Abstract:

Technologies such as Atomic Force Microscopy (AFM) have proven to be among the most versatile research instruments in the field of nanotechnology, providing physical access to materials at the nanoscale. The working principle of AFM involves physical interaction with the sample at the nanometre scale to estimate the topography of the sample surface. The cantilever tip, a few nanometres in diameter, and the inherent elasticity of the cantilever allow it to bend in response to changes in the sample surface, leading to accurate estimation of the sample topography. Despite the capabilities of the AFM, there is a lack of intuitive user interfaces that allow interaction with materials at the nanoscale analogous to the way we are accustomed to at the macro level. To bridge this gap in intuitive interface design and development, a haptics interface is designed in conjunction with a Bruker Nanos AFM. Interaction with materials at the nanoscale is characterised by estimating the forces experienced by the cantilever tip using geometric deformation principles. The estimated forces are reflected to the user, in a controlled manner, through the haptics interface. The established mathematical framework for force estimation can be adopted for AFM operation in air as well as in liquid media.
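At first order, the force on the cantilever tip follows Hooke's law, F = k·δ, where k is the cantilever spring constant and δ the measured deflection; the geometric-deformation model mentioned above refines this, but the basic conversion looks like this (the spring-constant value is an assumption, typical of contact-mode cantilevers):

```python
def cantilever_force(deflection_nm, spring_constant_n_per_m=0.1):
    """Tip-sample force from cantilever deflection via Hooke's law, F = k * delta.
    k = 0.1 N/m is an assumed, typical contact-mode value."""
    return spring_constant_n_per_m * deflection_nm * 1e-9   # force in newtons

# a 10 nm deflection at k = 0.1 N/m corresponds to 1 nN
force = cantilever_force(10.0)   # → 1e-9 N
```

Scaling such nanonewton forces up to the human-perceptible range is what the haptics interface does in a controlled manner.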