922 results for LIFSHITZ POINTS
Abstract:
The aim of this thesis is to verify whether some matter solutions of the theory of general relativity also satisfy the equations of Hořava-Lifshitz theory in the infrared limit. To this end, we start from the simplest possible solutions known in general relativity, such as a null-radiation fluid and dust, and find that they do not correspond to any solutions of Hořava-Lifshitz gravity in the low-energy (infrared) limit, in which the latter theory should reduce to the former. This result points to new challenges in the direction of theoretical adjustments that would allow this theory to correctly describe both the cosmological scenario and the formation of structures, whether stable or collapsed. To make the work clearer, an introduction is given to Hořava-Lifshitz theory, the central topic of this work, and to how it couples to matter.
Abstract:
Of fifteen processing plants surveyed in Sri Lanka, only five were found to have a prawn process which was adequately controlled. The most common process faults were: inadequate chilling of prawns after a wash in 30°C mains water, the use of large blocks of ice to cool prawns, and high ratios of prawns to ice. There was also ample scope for cross-contamination of the processed prawns.
Abstract:
The unscented Kalman filter (UKF) is a widely used method in control and time series applications. The UKF suffers from arbitrary parameters necessary for a step known as sigma point placement, which can cause it to perform poorly in nonlinear problems. We show how to treat sigma point placement in a UKF as a learning problem in a model-based view. We demonstrate that learning to place the sigma points correctly from data can make sigma point collapse much less likely. Learning can result in a significant increase in predictive performance over default settings of the parameters in the UKF and other filters designed to avoid the problems of the UKF, such as the GP-ADF. At the same time, we maintain a lower computational complexity than the other methods. We call our method UKF-L. © 2010 IEEE.
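For reference, the step in question is the unscented transform's sigma point construction. The sketch below shows the standard scaled placement, where alpha, beta, and kappa are the hand-tuned parameters that UKF-L proposes to learn from data; the function and variable names are illustrative, not taken from the paper.

```python
# Minimal sketch of scaled sigma point placement for the unscented transform.
# alpha, beta, kappa are the "arbitrary parameters" discussed in the abstract.
import numpy as np

def sigma_points(mean, cov, alpha=1e-3, beta=2.0, kappa=0.0):
    """Return 2n+1 sigma points and their weights for a Gaussian (mean, cov)."""
    n = mean.shape[0]
    lam = alpha**2 * (n + kappa) - n
    sqrt_cov = np.linalg.cholesky((n + lam) * cov)   # matrix square root

    points = np.vstack([mean,
                        mean + sqrt_cov.T,            # mean + columns of the square root
                        mean - sqrt_cov.T])           # shape (2n+1, n)

    w_mean = np.full(2 * n + 1, 1.0 / (2 * (n + lam)))
    w_cov = w_mean.copy()
    w_mean[0] = lam / (n + lam)
    w_cov[0] = w_mean[0] + (1 - alpha**2 + beta)      # extra weight on the centre point
    return points, w_mean, w_cov
```

Propagating these points through the nonlinearity and re-weighting them gives the UKF's predicted mean and covariance; poorly chosen parameters can make the points collapse onto the mean, which is the failure mode the paper addresses by learning the placement.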
Abstract:
This paper presents the first performance evaluation of interest points on scalar volumetric data. Such data encodes 3D shape, a fundamental property of objects. The use of another such property, texture (i.e. 2D surface colouration), or appearance, for object detection, recognition and registration has been well studied; 3D shape less so. However, the increasing prevalence of depth sensors and the diminishing returns to be had from appearance alone have seen a surge in shape-based methods. In this work we investigate the performance of several detectors of interest points in volumetric data, in terms of repeatability, number and nature of interest points. Such methods form the first step in many shape-based applications. Our detailed comparison, with both quantitative and qualitative measures on synthetic and real 3D data, both point-based and volumetric, aids readers in selecting a method suitable for their application. © 2011 IEEE.
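As one illustration of the kind of measure involved, a repeatability score checks how many interest points detected in a reference volume reappear, under a known transform, in a second volume. The sketch below is a simplified version under assumed conventions (point arrays, a callable transform, a distance tolerance), not the paper's exact protocol.

```python
# Hedged sketch of a repeatability score for 3D interest point detectors.
import numpy as np

def repeatability(pts_ref, pts_warp, transform, tol=2.0):
    """pts_* : (N, 3) arrays of detected 3D points; transform maps ref -> warp."""
    mapped = transform(pts_ref)                                  # reference points in warped frame
    d = np.linalg.norm(mapped[:, None, :] - pts_warp[None, :, :], axis=-1)
    matched = d.min(axis=1) <= tol                               # nearest warped detection within tol
    return matched.mean()                                        # fraction of points that repeat
```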
Abstract:
The unscented Kalman filter (UKF) is a widely used method in control and time series applications. The UKF suffers from arbitrary parameters necessary for sigma point placement, potentially causing it to perform poorly in nonlinear problems. We show how to treat sigma point placement in a UKF as a learning problem in a model-based view. We demonstrate that learning to place the sigma points correctly from data can make sigma point collapse much less likely. Learning can result in a significant increase in predictive performance over default settings of the parameters in the UKF and other filters designed to avoid the problems of the UKF, such as the GP-ADF. At the same time, we maintain a lower computational complexity than the other methods. We call our method UKF-L. © 2011 Elsevier B.V.
Abstract:
Do hospitals experience safety tipping points as utilization increases, and if so, what are the implications for hospital operations management? We argue that safety tipping points occur when managerial escalation policies are exhausted and workload variability buffers are depleted. Front-line clinical staff is forced to ration resources and, at the same time, becomes more error prone as a result of elevated stress hormone levels. We confirm the existence of safety tipping points for in-hospital mortality using the discharge records of 82,280 patients across six high-mortality-risk conditions from 256 clinical departments of 83 German hospitals. Focusing on survival during the first seven days following admission, we estimate a mortality tipping point at an occupancy level of 92.5%. Among the 17% of patients in our sample who experienced occupancy above the tipping point during the first seven days of their hospital stay, high occupancy accounted for one in seven deaths. The existence of a safety tipping point has important implications for hospital management. First, flexible capacity expansion is more cost-effective for safety improvement than rigid capacity, because it will only be used when occupancy reaches the tipping point. In the context of our sample, flexible staffing saves more than 40% of the cost of a fully staffed capacity expansion, while achieving the same reduction in mortality. Second, reducing the variability of demand by pooling capacity in hospital clusters can greatly increase safety in a hospital system, because it reduces the likelihood that a patient will experience occupancy levels beyond the tipping point. Pooling the capacity of nearby hospitals in our sample reduces the number of deaths due to high occupancy by 34%.
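One simple way to formalise a tipping point of this kind is a hinge term in a mortality regression, which lets the occupancy effect switch on only above a candidate threshold. The sketch below is a minimal illustration under assumed variable names; the actual study uses far richer risk adjustment and survival modelling.

```python
# Minimal sketch: a hinge (piecewise) occupancy term in a logistic mortality model.
import numpy as np
import statsmodels.api as sm

def fit_tipping_model(occupancy, died, threshold=0.925):
    """occupancy in [0, 1]; died is a 0/1 outcome; threshold is the candidate tipping point."""
    hinge = np.clip(occupancy - threshold, 0.0, None)        # zero below the threshold
    X = sm.add_constant(np.column_stack([occupancy, hinge]))
    return sm.Logit(died, X).fit(disp=0)                     # positive hinge coefficient =>
                                                             # excess mortality above the threshold
```

Scanning the threshold over a grid and keeping the best-fitting value is one simple way to locate a tipping point such as the 92.5% occupancy level reported above.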
Abstract:
We present a matching framework to find robust correspondences between image features by considering the spatial information between them. To achieve this, we define spatial constraints on the relative orientation and change in scale between pairs of features. A pairwise similarity score, which measures the similarity of features based on these spatial constraints, is considered. The pairwise similarity scores for all pairs of candidate correspondences are then accumulated in a 2-D similarity space. Robust correspondences can be found by searching for clusters in the similarity space, since actual correspondences are expected to form clusters that satisfy similar spatial constraints in this space. As it is difficult to achieve reliable and consistent estimates of scale and orientation, an additional contribution is that these parameters do not need to be determined at the interest point detection stage, which differs from conventional methods. Polar matching of dual-tree complex wavelet transform features is used, since it fits naturally into the framework with the defined spatial constraints. Our tests show that the proposed framework is capable of producing robust correspondences with higher correspondence ratios and reasonable computational efficiency, compared to other well-known algorithms. © 1992-2012 IEEE.
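As a rough illustration of the accumulation step, the sketch below bins the relative orientation and (log) scale change of each pair of candidate correspondences into a 2-D similarity space; dense bins then indicate mutually consistent matches. The bin layout, ranges, and unit votes are assumptions for illustration, not the paper's parameters.

```python
# Sketch of accumulating pairwise spatial constraints in a 2-D similarity space.
import numpy as np

def accumulate_similarity(pair_params, n_angle_bins=36, n_scale_bins=20):
    """pair_params: iterable of (dtheta, dlogscale) values, one per pair of
    candidate correspondences; returns the 2-D vote histogram."""
    space = np.zeros((n_angle_bins, n_scale_bins))
    for dtheta, dlogscale in pair_params:
        a = int((dtheta % (2 * np.pi)) / (2 * np.pi) * n_angle_bins) % n_angle_bins
        s = int(np.clip((dlogscale + 2.0) / 4.0 * n_scale_bins, 0, n_scale_bins - 1))
        space[a, s] += 1.0          # a similarity weight could replace the unit vote
    return space                    # clusters (peaks) mark robust correspondences
```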
Abstract:
The existing machine vision-based 3D reconstruction software programs provide a promising low-cost and, in some cases, automatic solution for infrastructure as-built documentation. However, in several steps of the reconstruction process they rely solely on detecting and matching corner-like features across multiple views of a scene. Therefore, in infrastructure scenes that include uniform materials and poorly textured surfaces, these programs fail with high probability due to a lack of feature points. Moreover, except for a few programs that generate dense 3D models through significantly time-consuming algorithms, most provide only a sparse reconstruction that does not necessarily include required points such as corners or edges; these points then have to be matched manually across different views, which can make the process considerably laborious. To address these limitations, this paper presents a video-based as-built documentation method that automatically builds detailed 3D maps of a scene by aligning edge points between video frames. Compared to corner-like features, edge points are far more plentiful, even in untextured scenes, and often carry important semantic associations. The method has been tested on poorly textured infrastructure scenes, and the results indicate that a combination of edge and corner-like features would allow dealing with a broader range of scenes.
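As a small illustration of the kind of edge points the method aligns, the sketch below (assuming OpenCV is available) extracts Canny edge pixels from a single video frame; the alignment and reconstruction steps themselves are outside this illustration.

```python
# Sketch: extract edge points from a video frame for frame-to-frame alignment.
import cv2
import numpy as np

def edge_points(frame_bgr, low=50, high=150):
    """Return (N, 2) pixel coordinates of Canny edge points in a video frame."""
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    edges = cv2.Canny(gray, low, high)
    ys, xs = np.nonzero(edges)
    return np.column_stack([xs, ys]).astype(np.float32)
```

Even in poorly textured scenes, such edge points are typically far more plentiful than corner detections, which is the property the method exploits.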
Abstract:
Surface temperature measurements from two discs of a gas turbine compressor rig are used as boundary conditions for the transient conduction solution (inverse heat transfer analysis). The disc geometry is complex, and so the finite element method is used. There are often large radial temperature gradients on the discs, and the equations are therefore solved taking into account the dependence of thermal conductivity on temperature. The solution technique also makes use of a multigrid algorithm to reduce the solution time. This is particularly important since a large amount of data must be analyzed to obtain correlations of the heat transfer. The finite element grid is also used for a network analysis to calculate the radiant heat transfer in the cavity formed between the two compressor discs. The work discussed here proved particularly challenging as the disc temperatures were only measured at four different radial locations. Four methods of surface temperature interpolation are examined, together with their effect on the local heat fluxes. It is found that the choice of interpolation method depends on the available number of data points. Bessel interpolation gives the best results for four data points, whereas cubic splines are preferred when there are considerably more data points. The results from the analysis of the compressor rig data show that the heat transfer near the disc inner radius appears to be influenced by the central throughflow. However, for larger radii, the heat transfer from the discs and peripheral shroud is found to be consistent with that of a buoyancy-induced flow.
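To illustrate the interpolation step with made-up numbers, the sketch below fits a cubic spline through four measured radial temperatures and evaluates it at the finite element surface nodes; the radii and temperatures are example values only, and Bessel interpolation is not shown.

```python
# Sketch: interpolate sparse radial surface temperatures for use as boundary conditions.
import numpy as np
from scipy.interpolate import CubicSpline

r_meas = np.array([0.10, 0.18, 0.26, 0.34])      # measured radial positions [m] (example values)
T_meas = np.array([350.0, 372.0, 401.0, 433.0])  # measured surface temperatures [K] (example values)

spline = CubicSpline(r_meas, T_meas)

r_nodes = np.linspace(r_meas[0], r_meas[-1], 50)  # finite element surface nodes
T_nodes = spline(r_nodes)                         # interpolated boundary temperatures
```

The choice of interpolant matters because the inverse conduction solution effectively differentiates these temperatures; with only four measurement points, the abstract reports that Bessel interpolation behaved better than cubic splines.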