979 results for DYNAMIC FEATURES
Abstract:
For broadcasting purposes, MIXED REALITY, the combination of real and virtual scene content, has become ubiquitous nowadays. Mixed Reality recording still requires expensive studio setups and is often limited to simple color keying. We present a system for Mixed Reality applications which uses depth keying and provides three-dimensional mixing of real and artificial content. It features enhanced realism through automatic shadow computation, which, besides the correct alignment of the two modalities and correct occlusion handling, we consider a core issue for obtaining realism and a convincing visual perception. Furthermore, we present a way to support placement of virtual content in the scene. The core feature of our system is the incorporation of a TIME-OF-FLIGHT (TOF) camera device. This device delivers real-time depth images of the environment at a reasonable resolution and quality. The camera is used to build a static environment model, and it also allows correct handling of mutual occlusions between real and virtual content, shadow computation and enhanced content planning. The presented system is inexpensive, compact, mobile, flexible and provides convenient calibration procedures. Chroma keying is replaced by depth keying, which is performed efficiently on the GRAPHICS PROCESSING UNIT (GPU) using an environment model and the current ToF camera image. Automatic extraction and tracking of dynamic scene content is thereby performed, and this information is used for planning and alignment of virtual content. An additional sustainable feature is that depth maps of the mixed content are available in real time, which makes the approach suitable for future 3DTV productions. This paper gives an overview of the whole system approach, including camera calibration, environment model generation, real-time keying and mixing of virtual and real content, shadowing for virtual content, and dynamic object tracking for content planning.
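The depth-keying idea described above can be sketched in a few lines: compare the live ToF depth image with the static environment model to segment dynamic foreground, and resolve real/virtual occlusions per pixel by depth. This is an illustrative sketch (the function names and the threshold value are mine, not from the system described):

```python
import numpy as np

def depth_key(current_depth, env_depth, threshold=0.05):
    """Foreground mask from depth keying: a pixel is dynamic content
    if it is noticeably closer to the camera than the stored static
    environment model (threshold in metres, illustrative value)."""
    return (env_depth - current_depth) > threshold

def composite(real_rgb, virtual_rgb, current_depth, virtual_depth):
    """Per-pixel mutual occlusion handling: keep whichever content,
    real or virtual, is closer to the camera."""
    mask = virtual_depth < current_depth  # virtual content is in front
    out = real_rgb.copy()
    out[mask] = virtual_rgb[mask]
    return out
```

In the actual system this comparison runs on the GPU per fragment; the NumPy version only shows the logic.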
Abstract:
Cost-efficient operation while satisfying performance and availability guarantees in Service Level Agreements (SLAs) is a challenge for Cloud Computing, as these are potentially conflicting objectives. We present a framework for SLA management based on multi-objective optimization. The framework features a forecasting model for determining the best virtual machine-to-host allocation given the need to minimize SLA violations, energy consumption and resource wasting. A comprehensive SLA management solution is proposed that uses event processing for monitoring and enables dynamic provisioning of virtual machines onto the physical infrastructure. We validated our implementation against several standard heuristics and showed that our approach performs significantly better.
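The trade-off among the three objectives can be illustrated with a toy weighted-sum cost over candidate VM-to-host allocations. This is a simplified stand-in for the paper's forecasting-based multi-objective optimization; the weights, capacity units and brute-force search are made up for illustration:

```python
from itertools import product

def allocation_cost(allocation, hosts, vms, w_sla=1.0, w_energy=0.5, w_waste=0.25):
    """Weighted-sum score of a VM-to-host allocation over the three
    objectives named in the abstract: SLA violations (overload),
    energy (powered-on hosts) and resource wasting (idle capacity).
    `allocation` maps vm_id -> host_id; capacities/demands are abstract units."""
    cost = 0.0
    for host_id, capacity in hosts.items():
        demand = sum(vms[v] for v, h in allocation.items() if h == host_id)
        if demand > capacity:
            cost += w_sla * (demand - capacity)    # overload -> SLA risk
        elif demand > 0:
            cost += w_energy                       # host must be powered on
            cost += w_waste * (capacity - demand)  # idle capacity is wasted
    return cost

def best_allocation(hosts, vms):
    """Exhaustive search over all mappings; only viable for tiny instances,
    which is why the paper compares against heuristics instead."""
    vm_ids, host_ids = list(vms), list(hosts)
    return min(
        (dict(zip(vm_ids, combo)) for combo in product(host_ids, repeat=len(vm_ids))),
        key=lambda a: allocation_cost(a, hosts, vms),
    )
```

With two half-loaded VMs and two hosts, the minimum-cost solution consolidates both VMs onto one host, saving energy without overload.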
Abstract:
OBJECTIVE Texture analysis is an alternative method to quantitatively assess MR images. In this study, we introduce dynamic texture parameter analysis (DTPA), a novel technique to investigate the temporal evolution of texture parameters using dynamic susceptibility contrast enhanced (DSCE) imaging. Here, we aim to introduce the method and its application to enhancing lesions (EL), non-enhancing lesions (NEL) and normal appearing white matter (NAWM) in multiple sclerosis (MS). METHODS We investigated 18 patients with MS and clinically isolated syndrome (CIS), according to the 2010 McDonald criteria, using DSCE imaging at different field strengths (1.5 and 3 Tesla). Tissues of interest (TOIs) were defined within 27 EL, 29 NEL and 37 NAWM areas after normalization, and eight histogram-based texture parameter maps (TPMs) were computed. TPMs quantify the heterogeneity of the TOI. For every TOI, the average, variance, skewness, kurtosis and variance-of-the-variance statistical parameters were calculated. These TOI parameters were further analyzed using one-way ANOVA followed by multiple Wilcoxon rank-sum tests corrected for multiple comparisons. RESULTS Tissue- and time-dependent differences were observed in the dynamics of the computed texture parameters. Sixteen parameters discriminated between EL, NEL and NAWM (pAVG = 0.0005). Significant differences in the DTPA texture maps were found during inflow (52 parameters), outflow (40 parameters) and reperfusion (62 parameters). The strongest discriminators among the TPMs were the variance-related parameters, while skewness and kurtosis TPMs were in general less sensitive in detecting differences between the tissues. CONCLUSION DTPA of DSCE image time series revealed characteristic time responses for ELs, NELs and NAWM. This may be further used for a refined quantitative grading of MS lesions during their evolution from the acute to the chronic state.
DTPA discriminates lesions beyond features of enhancement or T2 hypersignal, on a numeric scale that allows for a more subtle grading of MS lesions.
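The per-TOI statistics DTPA relies on (average, variance, skewness, kurtosis, variance-of-the-variance) are standard histogram moments. A minimal sketch, assuming a TOI is given as an array of voxel intensities at one time point of the DSC series:

```python
import numpy as np

def texture_params(toi):
    """Histogram-based texture statistics for one tissue of interest
    (TOI) at one time point: the four moments named in the abstract."""
    x = np.asarray(toi, dtype=float).ravel()
    mu = x.mean()
    var = x.var()
    sd = np.sqrt(var)
    skew = ((x - mu) ** 3).mean() / sd ** 3   # asymmetry of the histogram
    kurt = ((x - mu) ** 4).mean() / var ** 2  # peakedness / tail weight
    return {"average": mu, "variance": var, "skewness": skew, "kurtosis": kurt}

def variance_of_variance(toi_series):
    """Dispersion of the per-time-point variance across the image
    time series (the fifth parameter listed in the abstract)."""
    return np.var([np.var(frame) for frame in toi_series])
```

In DTPA these values are tracked over the inflow, outflow and reperfusion intervals of the contrast bolus; the sketch only shows the single-time-point computation.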
Abstract:
In this paper, we present the Cellular Dynamic Simulator (CDS) for simulating diffusion and chemical reactions within crowded molecular environments. CDS is based on a novel event-driven algorithm specifically designed for precise calculation of the timing of collisions, reactions and other events for each individual molecule in the environment. Generic mesh-based compartments allow the creation/importation of very simple or detailed cellular structures that exist in a 3D environment. Multiple levels of compartments and static obstacles can be used to create a dense environment to mimic cellular boundaries and the intracellular space. The CDS algorithm takes into account volume exclusion and molecular crowding that may impact signaling cascades in small sub-cellular compartments such as dendritic spines. With the CDS, we can simulate simple enzyme reactions, aggregation, channel transport, as well as highly complicated chemical reaction networks of both freely diffusing and membrane-bound multi-protein complexes. Components of the CDS are generally defined such that the simulator can be applied to a wide range of environments in terms of scale and level of detail. Through an initialization GUI, a simple simulation environment can be created and populated within minutes, yet the tool is powerful enough to design complex 3D cellular architecture. The initialization tool allows visual confirmation of the environment construction prior to execution by the simulator. This paper describes the CDS algorithm and its design and implementation, provides an overview of the types of features available, and highlights the utility of those features in demonstrations.
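The event-driven core described above can be sketched as a priority queue of timed events, where each handler updates state and schedules follow-up events (e.g. a molecule's next collision or reaction time). The structure below is illustrative, not the actual CDS implementation:

```python
import heapq

def run_events(initial_events, handlers, t_end):
    """Minimal event-driven simulation loop.

    Events are (time, kind, payload) tuples kept in a min-heap, so the
    earliest event is always processed next. Each handler may return new
    events to schedule (e.g. the next collision computed from the
    molecule's updated trajectory). Runs until the time horizon t_end."""
    queue = list(initial_events)
    heapq.heapify(queue)
    log = []
    while queue:
        t, kind, payload = heapq.heappop(queue)
        if t > t_end:
            break
        log.append((t, kind))
        for ev in handlers[kind](t, payload):
            heapq.heappush(queue, ev)
    return log
```

The point of the event-driven design is that time advances directly to the next interaction instead of in fixed steps, which is what allows precise collision and reaction timing per molecule.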
Abstract:
Purpose. The purpose of this study was to investigate statistical differences in MR perfusion imaging features that reflect the dynamics of gadolinium uptake in MS lesions using dynamic texture parameter analysis (DTPA). Methods. We investigated 51 MS lesions (25 enhancing, 26 nonenhancing lesions) of 12 patients. Enhancing lesions were prestratified into enhancing lesions with increased permeability (EL+) and enhancing lesions with subtle permeability (EL−). Histogram-based feature maps were computed from the raw DSC-image time series and the corresponding texture parameters were analyzed during the inflow, outflow, and reperfusion time intervals. Results. Significant differences were found between EL+ and EL− and between EL+ and nonenhancing inactive lesions (NEL). Main effects between EL+ versus EL− and EL+ versus NEL were observed during reperfusion, mainly in the mean and standard deviation (SD), while EL− and NEL differed only in their SD during outflow. Conclusion. DTPA allows grading of enhancing MS lesions according to their perfusion characteristics. Texture parameters of EL− were similar to NEL, while EL+ differed significantly from EL− and NEL. Dynamic texture analysis may thus be further investigated as a noninvasive endogenous marker of lesion formation and restoration.
Abstract:
In this paper we introduce technical efficiency via an intercept that evolves over time as an AR(1) process in a stochastic frontier (SF) panel data framework. The distinguishing features of the model are as follows. First, the model is dynamic in nature. Second, it can separate technical inefficiency from fixed firm-specific effects which are not part of inefficiency. Third, the model allows one to estimate technical change separately from change in technical efficiency. We propose the ML method to estimate the parameters of the model. Finally, we derive expressions to calculate/predict technical inefficiency (efficiency).
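Under the usual SF conventions, a model with the three listed features might be written as follows; the notation here is illustrative, not taken from the paper:

```latex
y_{it} = \alpha_i + x_{it}'\beta - u_{it} + v_{it}, \qquad
u_{it} = \rho\, u_{i,t-1} + \eta_{it}, \quad |\rho| < 1,
```

where $\alpha_i$ is a fixed firm-specific effect (not part of inefficiency), $u_{it} \ge 0$ is time-varying technical inefficiency evolving as an AR(1) process in the intercept, $v_{it}$ is idiosyncratic noise, and technical efficiency is then $TE_{it} = \exp(-u_{it})$. Separating $\alpha_i$ from $u_{it}$ is what delivers the model's second feature.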
Abstract:
Geostrophic surface velocities can be derived from the gradients of the mean dynamic topography, i.e., the difference between the mean sea surface and the geoid. Therefore, independently observed mean dynamic topography data are valuable input parameters and constraints for ocean circulation models. For a successful fit to observational dynamic topography data, not only the mean dynamic topography on the particular ocean model grid is required, but also information about its inverse covariance matrix. The calculation of the mean dynamic topography from satellite-based gravity field models and altimetric sea surface height measurements, however, is not straightforward. For this purpose, we previously developed an integrated approach to combining these two different observation groups in a consistent way without using the common filter approaches (Becker et al. in J Geodyn 59(60):99-110, 2012, doi:10.1016/j.jog.2011.07.0069; Becker in Konsistente Kombination von Schwerefeld, Altimetrie und hydrographischen Daten zur Modellierung der dynamischen Ozeantopographie, 2012, http://nbn-resolving.de/nbn:de:hbz:5n-29199). Within this combination method, the full spectral range of the observations is considered. Further, it allows the direct determination of the normal equations (i.e., the inverse of the error covariance matrix) of the mean dynamic topography on arbitrary grids, which is one of the requirements for ocean data assimilation. In this paper, we report progress through selection and improved processing of altimetric data sets. We focus on the preprocessing steps of along-track altimetry data from Jason-1 and Envisat to obtain a mean sea surface profile. During this procedure, a rigorous variance propagation is accomplished, so that, for the first time, the full covariance matrix of the mean sea surface is available.
The combination of the mean profile and a combined GRACE/GOCE gravity field model yields a mean dynamic topography model for the North Atlantic Ocean that is characterized by a defined set of assumptions. We show that including the geodetically derived mean dynamic topography with the full error structure in a 3D stationary inverse ocean model improves modeled oceanographic features over previous estimates.
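The two standard relations underlying this abstract are the definition of the mean dynamic topography and geostrophic balance at the ocean surface:

```latex
\bar{\eta} = \mathrm{MSS} - N, \qquad
u = -\frac{g}{f}\,\frac{\partial \bar{\eta}}{\partial y}, \qquad
v = \frac{g}{f}\,\frac{\partial \bar{\eta}}{\partial x},
```

where $\bar{\eta}$ is the mean dynamic topography, $\mathrm{MSS}$ the altimetric mean sea surface, $N$ the geoid height, $g$ gravity, $f$ the Coriolis parameter, and $(u, v)$ the geostrophic surface velocity components. Errors in $\mathrm{MSS}$ and $N$ propagate directly into $\bar{\eta}$, which is why the full covariance matrix matters for assimilation.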
Abstract:
We introduce two probabilistic, data-driven models that predict a ship's speed and the situations where a ship is likely to get stuck in ice, based on the joint effect of ice features such as the thickness and concentration of level ice, ice ridges and rafted ice; ice compression is also considered. To develop the models, two datasets were utilized. First, data from the Automatic Identification System about the performance of a selected ship was used. Second, a numerical ice model, HELMI, developed at the Finnish Meteorological Institute, provided information about the ice field. The relations between the ice conditions and ship movements were established using Bayesian learning algorithms. The case study presented in this paper considers a single, unassisted trip of an ice-strengthened bulk carrier between two Finnish ports in the presence of challenging ice conditions, which varied in time and space. The obtained results show good prediction power of the models: on average 80% for predicting the ship's speed within specified bins, and above 90% for predicting cases where a ship may get stuck in ice. We expect this new approach to facilitate safe and effective route selection for ice-covered waters where the ship performance is reflected in the objective function.
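As a minimal illustration of Bayesian learning over discretised ice features, a naive Bayes classifier predicting a speed bin can be sketched as below. The actual models in the paper are richer Bayesian networks capturing joint effects, and the feature names here are made up:

```python
from collections import Counter, defaultdict

def train_naive_bayes(samples):
    """samples: list of (features_dict, speed_bin) observations, e.g.
    ({"level_ice": "thick", "ridging": "high"}, "slow").
    Learns class priors P(bin) and per-feature value counts."""
    prior = Counter(bin_ for _, bin_ in samples)
    cond = defaultdict(Counter)  # (feature, bin) -> Counter of values
    for feats, bin_ in samples:
        for f, v in feats.items():
            cond[(f, bin_)][v] += 1
    return prior, cond

def predict(prior, cond, feats):
    """Most probable speed bin given discretised ice features,
    with crude add-one smoothing for unseen feature values."""
    total = sum(prior.values())
    def score(bin_):
        p = prior[bin_] / total
        for f, v in feats.items():
            c = cond[(f, bin_)]
            p *= (c[v] + 1) / (sum(c.values()) + 1 + len(c))
        return p
    return max(prior, key=score)
```

Binning the continuous predictions (as the abstract's "specified bins" suggests) is what makes accuracy figures like the reported 80% well defined.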
Abstract:
Dynamic penetrometer data obtained with the Nimrod penetrometer (MARUM). Data is presented as (i) penetration depth (including for different layers, if present), (ii) measured deceleration, and (iii) estimated quasi-static bearing capacity, including the range of uncertainty due to the processing method. Lat/long coordinates are given.
Abstract:
Identity is a recurrent research interest in current sociolinguistics and it is also of primary interest in digital discourse studies. Identity construction is closely related to stance and style (Eckert 2008; Jaffe 2009), which are fundamental concepts for understanding language use and its social meanings in the case of social media users from Malaga. As the specific social meanings of a set of dialect features constitute a style, this style and the social (and technological) context in which the variants are used determine the meanings that are actually associated with each variant. Hence, every variant has its own indexical field covering any number of potential meanings. The Spanish spoken in Malaga, like Andalusian Spanish in general, was often considered in the past an incorrect, low-prestige variety of Spanish, strongly associated with the poor, rural, backward South of Spain. This southern Spanish variety is easily recognised because of its innovative phonetic features that diverge from the national standard. In this study, several of these phonetic dialect features are examined, which users from Malaga purposefully employ (in a textualised form) on social media for identity construction. This identity construction is analysed through interactional and ethnographic methods: a perception task and an imitation task served as key data and were supplemented by answers to a series of open questions. Further data stems from visual, multimodal elements (e.g. images, photos, videos) posted by users from the city of Malaga. The program TAMS Analyzer was used for data codification and analysis. Results show that certain features that in spoken language are considered rural and old-fashioned acquire new meaning on social media, namely of urbanity and fashion. Moreover, these features, if used online, are associated with hipsters.
That is, the “cool” social media index the “coolness” of the dialect features in question and, thus, the mediatisation makes their indexical fields even more multi-layered and dynamic. Social media users from Malaga performatively employ these stylised dialect features to project a hipster identity and certain related stances.
Abstract:
Culverts are very common in recent railway lines. Wildlife corridors and drainage conduits often fall into this category of partially buried structures. Their dynamic behavior has received far less attention than that of other structures such as bridges, but their large number makes that study an interesting challenge from the point of view of safety and savings. In this paper a complete study of a culvert, including on-site measurements as well as numerical modelling, will be presented. The structure belongs to the high-speed railway line linking Segovia and Valladolid, in Spain. The line was opened to traffic in 2004. Its dimensions (3 x 3 m) are the most frequent along the line. Other factors, such as a reduced overburden (0.6 m) and an almost right angle with the track axis, make it an interesting example from which to extract generalized conclusions. On-site measurements have been performed on the structure, recording the dynamic response at selected points during the passage of high-speed trains at speeds ranging between 200 and 300 km/h. The measurements by themselves provide a good insight into the main features of the dynamic behaviour of the structure. A 3D finite element model of the structure, representing its key features, was also studied, as it allows further understanding of the dynamic response to the train loads. In the paper the discrepancies between predicted and measured vibration levels will be analyzed and some advice on numerical modelling will be proposed.
Abstract:
The traditional ballast track structures are still being used successfully in high-speed railway lines; however, technical problems or performance features have led to non-ballast track solutions in some cases. Considerable maintenance work is needed for ballasted tracks due to track deterioration. Therefore, it is very important to understand the mechanism of track deterioration and to predict the track settlement or track irregularity growth rate in order to reduce track maintenance costs and enable new track structures to be designed. The objective of this work is to develop the most adequate and efficient models for calculating dynamic traffic load effects on railway track infrastructure, and then to evaluate the dynamic effect on ballast track settlement using a ballast track settlement prediction model, which consists of the previously selected vehicle/track dynamic model and a track settlement law. The calculations are based on dynamic finite element models with direct time integration, contact between wheel and rail, and interaction with railway cars. An initial irregularity profile is used in the prediction model. The track settlement law is considered to be a function of the number of loading cycles and the magnitude of the loading, which represents the long-term behavior of ballast settlement. The results obtained include the track irregularity growth and the contact force in the final iteration of the numerical simulation.
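The general shape of such a settlement law (settlement as a function of load magnitude and number of loading cycles) can be illustrated with a placeholder power law. The functional form and coefficients below are illustrative only, not the calibrated law used in the work:

```python
def ballast_settlement(load_amplitude, cycles, a=1.0e-4, alpha=2.0, beta=0.2):
    """Illustrative empirical settlement law of the general shape
    described in the abstract:

        s(F, N) = a * F**alpha * N**beta

    Settlement s grows with the dynamic load magnitude F (from the
    vehicle/track interaction model) and, sublinearly, with the number
    of loading cycles N, capturing long-term ballast behavior.
    All coefficients are placeholders, not calibrated values."""
    return a * load_amplitude ** alpha * cycles ** beta
```

In a prediction model of this kind, the dynamic simulation supplies the wheel-rail contact forces, the law accumulates settlement per cycle block, and the updated irregularity profile is fed back into the next dynamic run.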
Abstract:
Abstract interpretation-based data-flow analysis of logic programs is at this point relatively well understood from the point of view of general frameworks and abstract domains. On the other hand, comparatively little attention has been given to the problems which arise when analysis of a full, practical dialect of the Prolog language is attempted, and only a few solutions to these problems have been proposed to date. Such problems relate to dealing correctly with all builtins, including meta-logical and extra-logical predicates, with dynamic predicates (where the program is modified during execution), and with the absence of certain program text during compilation. Existing proposals for dealing with such issues generally restrict in one way or another the classes of programs which can be analyzed if the information from analysis is to be used for program optimization. This paper attempts to fill this gap by considering a full dialect of Prolog, essentially following the recently proposed ISO standard, pointing out the problems that may arise in the analysis of such a dialect, and proposing a combination of known and novel solutions that together allow the correct analysis of arbitrary programs using the full power of the language.
Abstract:
Underpasses are common in modern railway lines. Wildlife corridors and drainage conduits often fall into this category of partially buried structures. Their dynamic behavior has received far less attention than that of other structures such as bridges, but their large number makes their study an interesting challenge from the viewpoint of safety and cost savings. Here, we present a complete study of a culvert, including on-site measurements and numerical modeling. The studied structure belongs to the high-speed railway line linking Segovia and Valladolid in Spain. The line was opened to traffic in 2004. On-site measurements were performed for the structure by recording the dynamic response at selected points of the structure during the passage of high-speed trains at speeds ranging between 200 and 300 km/h. The measurements provide not only reference values suitable for model fitting, but also a good insight into the main features of the dynamic behavior of this structure. Finite element techniques were used to model the dynamic behavior of the structure and its key features. Special attention is paid to vertical accelerations, the values of which should be limited to avoid track instability according to Eurocode. This study furthers our understanding of the dynamic response of railway underpasses to train loads.
Abstract:
Runtime variability is a key technique for the success of Dynamic Software Product Lines (DSPLs), as certain applications demand reconfiguration of system features and execution plans at runtime. In this emerging research work we address the problem of dynamic changes in feature models in sensor network product families, where nodes of the network demand dynamic reconfiguration at post-deployment time.