915 results for building information model


Relevance:

30.00%

Publisher:

Abstract:

The research analysed the interaction between technological innovation, organisational change, and the transformation of public services and political processes at the Ajuntament de Barcelona (Barcelona City Council). Taking as its starting hypothesis the emergence of a possible "Barcelona model II" (understood as running parallel to the Barcelona model, an internationally recognised example of a combination of urban policies), the study examined the internal transformations of the Barcelona city government linked to the innovative use of information and communication technologies and related them to the set of social and political changes interacting with this process.

Relevance:

30.00%

Publisher:

Abstract:

This article presents recent WMR (wheeled mobile robot) navigation experiments using local perception knowledge provided by monocular and odometer systems. A local, narrow perception horizon is used to plan safe trajectories towards the objective. Monocular data are therefore proposed as a way to obtain real-time local information by building two-dimensional occupancy grids through time integration of the frames. Path planning is accomplished using attraction potential fields, while trajectory tracking is performed using model predictive control techniques. The results address indoor situations and were obtained with the available laboratory platform, a differential-drive mobile robot.
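
To make the planning step concrete, the following minimal Python sketch builds an attractive/repulsive potential over a small two-dimensional occupancy grid and follows its steepest descent towards the goal. It illustrates the general technique only, not the authors' implementation; the grid, gains and obstacle layout are assumed values, and the model predictive tracking stage is not reproduced.

# Minimal potential-field planner over a 2-D occupancy grid (illustrative sketch,
# not the article's implementation). Cells marked 1 are obstacles; the robot
# follows the steepest descent of the combined potential towards the goal.
import numpy as np

def potential_field(grid, goal, k_att=1.0, k_rep=50.0, d0=3.0):
    """Attractive potential towards `goal` plus repulsive potential near obstacles."""
    h, w = grid.shape
    ys, xs = np.mgrid[0:h, 0:w]
    d_goal = np.hypot(ys - goal[0], xs - goal[1])
    u_att = 0.5 * k_att * d_goal**2
    obs = np.argwhere(grid == 1)          # obstacle cells (brute force; fine for small local grids)
    if len(obs):
        d_obs = np.min(np.hypot(ys[..., None] - obs[:, 0], xs[..., None] - obs[:, 1]), axis=-1)
    else:
        d_obs = np.full_like(d_goal, np.inf)
    u_rep = np.where(d_obs < d0, 0.5 * k_rep * (1.0 / np.maximum(d_obs, 0.5) - 1.0 / d0)**2, 0.0)
    return u_att + u_rep

def greedy_descent(grid, start, goal, max_steps=500):
    """Follow the steepest descent of the potential from `start` towards `goal`."""
    u = potential_field(grid, goal)
    path, pos = [start], start
    for _ in range(max_steps):
        if pos == goal:
            break
        y, x = pos
        nbrs = [(y + dy, x + dx) for dy in (-1, 0, 1) for dx in (-1, 0, 1)
                if (dy, dx) != (0, 0) and 0 <= y + dy < grid.shape[0] and 0 <= x + dx < grid.shape[1]]
        pos = min(nbrs, key=lambda p: u[p])
        path.append(pos)
    return path

grid = np.zeros((20, 20)); grid[8:12, 10] = 1      # toy local occupancy grid with one obstacle wall
print(greedy_descent(grid, start=(18, 2), goal=(2, 18))[:5])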

Relevance:

30.00%

Publisher:

Abstract:

This paper analyses the associations of the Normalized Difference Vegetation Index (NDVI) and the Enhanced Vegetation Index (EVI) with the prevalence of schistosomiasis and the presence of Biomphalaria glabrata in the state of Minas Gerais (MG), Brazil. Additionally, vegetation, soil and shade fraction images were created with a Linear Spectral Mixture Model (LSMM) from the blue, red and infrared channels of the Moderate Resolution Imaging Spectroradiometer spaceborne sensor, and the relationship between these images and the prevalence of schistosomiasis and the presence of B. glabrata was analysed. First, we found a high correlation between the vegetation fraction image and EVI and, second, a high correlation between the soil fraction image and NDVI. The results also indicate that there was a positive correlation between prevalence and the vegetation fraction image (July 2002), a negative correlation between prevalence and the soil fraction image (July 2002), and a positive correlation between B. glabrata and the shade fraction image (July 2002). This paper demonstrates that the LSMM variables can be used as a substitute for the standard vegetation indices (EVI and NDVI) to determine and delimit risk areas for B. glabrata and schistosomiasis in MG, which can be used to improve the allocation of resources for disease control.
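
As an illustration of the unmixing step behind such fraction images, the generic least-squares sketch below decomposes pixel reflectances into vegetation, soil and shade fractions. It is not the MODIS processing used in the study, and all reflectance values are invented for the example.

# Illustrative linear spectral unmixing: pixel reflectances in the blue, red and
# near-infrared bands are decomposed into vegetation, soil and shade fractions by
# least squares. The endmember spectra below are made-up numbers, not MODIS values.
import numpy as np

endmembers = np.array([[0.04, 0.06, 0.02],   # blue reflectance of vegetation, soil, shade
                       [0.05, 0.18, 0.02],   # red
                       [0.45, 0.30, 0.03]])  # near-infrared
pixels = np.array([[0.05, 0.10, 0.30],       # assumed pixel reflectances (blue, red, NIR)
                   [0.05, 0.15, 0.38]])

fractions, *_ = np.linalg.lstsq(endmembers, pixels.T, rcond=None)
fractions = np.clip(fractions, 0, None)
fractions /= fractions.sum(axis=0)           # simple post hoc sum-to-one constraint
print(fractions.T)                           # rows: pixels; columns: vegetation, soil, shade fractions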

Relevance:

30.00%

Publisher:

Abstract:

OBJECTIVE: An animal model has been developed to compare the effects of suture technique on the luminal dimensions and compliance of end-to-side vascular anastomoses. METHODS: Carotid and internal mammary arteries (IMAs) were exposed in three pigs (90 kg). The IMAs were sectioned distally to perform end-to-side anastomoses on the carotid arteries. One anastomosis was performed with a 7/0 polypropylene running suture. The other was performed with an automated suture delivery device (Perclose/Abbott Labs Inc.) that places a 7/0 polypropylene interrupted suture. Four piezoelectric crystals were sutured at the toe, the heel and both lateral sides of each anastomosis to measure the anastomotic axes. The anastomotic cross-sectional area (CSAA) was calculated as CSAA = π·m·M/4, where m and M are the minor and major axes of the elliptical anastomosis. Cross-sectional anastomotic compliance (CSAC) was calculated as CSAC = ΔCSAA/ΔP, where ΔP is the mean pulse pressure and ΔCSAA is the mean change in CSAA over the cardiac cycle. RESULTS: We collected a total of 1,200,000 pressure-length data points per animal. For the running suture, the mean systolic CSAA was 26.94±0.4 mm² and the mean diastolic CSAA was 26.30±0.5 mm² (mean ΔCSAA 0.64 mm²); the CSAC was 4.5×10⁻⁶ m²/kPa. For the interrupted suture, the mean systolic CSAA was 21.98±0.2 mm² and the mean diastolic CSAA was 17.38±0.3 mm² (mean ΔCSAA 4.6±0.1 mm²); the CSAC was 11×10⁻⁶ m²/kPa. CONCLUSIONS: This model, despite some limitations, can be a reliable source of information for improving the outcome of vascular anastomoses. The study demonstrates that suture technique has a substantial effect on the cross-sectional compliance of end-to-side anastomoses. Interrupted suture may maximise the anastomotic lumen and provides a considerably higher CSAC than continuous suture, which reduces flow turbulence, shear stress and intimal hyperplasia. The Heartflo anastomosis device is a reliable instrument that facilitates the performance of interrupted-suture anastomoses.
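
As a worked illustration of the quoted formulas, the short Python snippet below applies CSAA = π·m·M/4 and CSAC = ΔCSAA/ΔP to hypothetical axis measurements; the axes and the pulse pressure used here are assumptions, not values from the study.

# Worked example of the anastomotic area and compliance formulas quoted above
# (illustrative values only; the axes and pulse pressure below are assumed).
import math

def csaa(minor_mm, major_mm):
    """Cross-sectional area of an elliptical anastomosis, CSAA = pi * m * M / 4 (mm^2)."""
    return math.pi * minor_mm * major_mm / 4.0

csaa_sys = csaa(5.2, 6.6)          # hypothetical systolic minor/major axes (mm)
csaa_dia = csaa(5.0, 6.4)          # hypothetical diastolic axes (mm)
delta_p_kpa = 5.3                  # assumed mean pulse pressure (~40 mmHg)
csac = (csaa_sys - csaa_dia) / delta_p_kpa   # CSAC = dCSAA / dP, in mm^2 per kPa
print(f"CSAA systole {csaa_sys:.2f} mm^2, diastole {csaa_dia:.2f} mm^2, CSAC {csac:.3f} mm^2/kPa")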

Relevance:

30.00%

Publisher:

Abstract:

We analyse in a unified way how the presence of a trader with privileged information makes the market efficient when the release time of that information is known. We establish a general relation between the problem of finding an equilibrium and the problem of enlargement of filtrations. We also consider the case where the announcement time is random. In that case the market is not fully efficient, and an equilibrium exists if the sensitivity of prices with respect to global demand decreases over time in accordance with the distribution of the random time.

Relevance:

30.00%

Publisher:

Abstract:

Given the very large amount of data obtained every day through population surveys, much new research could reuse this information instead of collecting new samples. Unfortunately, relevant data are often scattered across different files obtained through different sampling designs. Data fusion is a set of methods used to combine information from different sources into a single dataset. In this article, we are interested in a specific problem: the fusion of two data files, one of which is quite small. We propose a model-based procedure combining a logistic regression with an Expectation-Maximization algorithm. Results show that, despite the scarcity of data, this procedure can perform better than standard matching procedures.
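
The sketch below illustrates the general idea of model-based fusion of a small donor file with a large recipient file: a logistic regression fitted on the shared covariates of the small file imputes the missing binary variable in the large file. The EM refinement described in the article is omitted, and all variable names and the simulated data are assumptions for illustration.

# Minimal sketch of model-based data fusion: a binary variable observed only in the
# small "donor" file is imputed into the large "recipient" file with a logistic
# regression on the shared covariates. Column names and data are hypothetical.
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
shared = ["age", "income"]
donor = pd.DataFrame({"age": rng.normal(45, 12, 300),
                      "income": rng.normal(3.0, 1.0, 300)})
donor["smoker"] = (0.05 * donor["age"] - 0.8 * donor["income"]
                   + rng.logistic(size=300) > 0).astype(int)       # variable present only in the small file
recipient = pd.DataFrame({"age": rng.normal(45, 12, 5000),
                          "income": rng.normal(3.0, 1.0, 5000)})

model = LogisticRegression().fit(donor[shared], donor["smoker"])
recipient["smoker_prob"] = model.predict_proba(recipient[shared])[:, 1]
recipient["smoker_imputed"] = rng.binomial(1, recipient["smoker_prob"])  # stochastic imputation
print(recipient.head())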

Relevance:

30.00%

Publisher:

Abstract:

Uncertainty quantification of petroleum reservoir models is one of the present challenges and is usually approached with a wide range of geostatistical tools linked with statistical optimisation and/or inference algorithms. Recent advances in machine learning offer a novel approach to modelling the spatial distribution of petrophysical properties in complex reservoirs, as an alternative to geostatistics. The approach is based on semi-supervised learning, which handles both "labelled" observed data and "unlabelled" data that have no measured value but describe prior knowledge and other relevant information in the form of manifolds in the input space where the modelled property is continuous. The proposed semi-supervised Support Vector Regression (SVR) model has demonstrated its capability to represent realistic geological features and to describe the stochastic variability and non-uniqueness of spatial properties. At the same time, it is able to capture and preserve key spatial dependencies, such as the connectivity of high-permeability geo-bodies, which is often difficult in contemporary petroleum reservoir studies. Semi-supervised SVR, as a data-driven algorithm, is designed to integrate various kinds of conditioning information and to learn dependencies from them. The semi-supervised SVR model is able to balance signal and noise levels and to control the prior belief in the available data. In this work, the stochastic semi-supervised SVR geomodel is integrated into a Bayesian framework to quantify the uncertainty of reservoir production with multiple models fitted to past dynamic observations (production history). Multiple history-matched models are obtained using stochastic sampling and/or MCMC-based inference algorithms, which evaluate the posterior probability distribution. The uncertainty of the model is described by the posterior probability of the model parameters that represent key geological properties: spatial correlation size, continuity strength, and the smoothness/variability of the spatial property distribution. The developed approach is illustrated with a fluvial reservoir case. The resulting probabilistic production forecasts are described by uncertainty envelopes. The paper compares the performance of models with different combinations of unknown parameters and discusses sensitivity issues.
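
As a rough illustration of the regression building block only (not the semi-supervised variant or the Bayesian history matching described above), the following Python sketch fits a plain Support Vector Regression to sparse, assumed well observations and predicts a petrophysical property over a two-dimensional grid.

# Illustrative sketch: plain SVR interpolating a petrophysical property (e.g. porosity)
# between sparse well locations on a 2-D grid. The semi-supervised variant and the
# MCMC history matching of the paper are not reproduced; all data are synthetic.
import numpy as np
from sklearn.svm import SVR

rng = np.random.default_rng(1)
wells_xy = rng.uniform(0, 10, size=(15, 2))                    # assumed well coordinates (km)
porosity = 0.2 + 0.05 * np.sin(wells_xy[:, 0]) + 0.01 * rng.normal(size=15)

svr = SVR(kernel="rbf", C=10.0, epsilon=0.005, gamma=0.3).fit(wells_xy, porosity)

gx, gy = np.meshgrid(np.linspace(0, 10, 50), np.linspace(0, 10, 50))
grid_xy = np.column_stack([gx.ravel(), gy.ravel()])
porosity_map = svr.predict(grid_xy).reshape(gx.shape)          # interpolated property model
print(porosity_map.min(), porosity_map.max())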

Relevance:

30.00%

Publisher:

Abstract:

Background: Multiple logistic regression is precluded from many practical applications in ecology that aim to predict the geographic distributions of species because it requires absence data, which are rarely available or are unreliable. In order to use multiple logistic regression, many studies have simulated "pseudo-absences" through a number of strategies, but it is unknown how the choice of strategy influences models and their geographic predictions. In this paper we evaluate the effect of several prevailing pseudo-absence strategies on predictions of the geographic distribution of a virtual species whose "true" distribution and relationship to three environmental predictors was predefined. We evaluated the effect of using (a) real absences, (b) pseudo-absences selected randomly from the background, and (c) two-step approaches in which pseudo-absences were selected from low-suitability areas predicted by either Ecological Niche Factor Analysis (ENFA) or BIOCLIM. We compared how the choice of pseudo-absence strategy affected model fit, predictive power, and information-theoretic model selection results. Results: Models built with true absences had the best predictive power and the best discriminatory power, and the "true" model (the one that contained the correct predictors) was supported by the data according to AIC, as expected. Models based on random pseudo-absences had among the lowest fit but yielded the second highest AUC value (0.97), and the "true" model was also supported by the data. Models based on two-step approaches had intermediate fit, the lowest predictive power, and the "true" model was not supported by the data. Conclusion: If ecologists wish to build parsimonious GLM models that will allow them to make robust predictions, a reasonable approach is to use a large number of randomly selected pseudo-absences and to perform model selection based on an information-theoretic approach. However, the resulting models can be expected to have limited fit.
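
A minimal sketch of the random-background pseudo-absence strategy, assuming a simulated virtual species with three environmental predictors: presences and randomly drawn background points are fitted with a logistic GLM, which is then summarised by AIC and AUC.

# Sketch of the "random background pseudo-absence" strategy discussed above: presences
# plus randomly drawn background points fitted with a logistic GLM, then compared by
# AIC and AUC. The simulated virtual-species setup is an assumption for illustration.
import numpy as np
import statsmodels.api as sm
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(2)
n_bg = 1000
env_presence = rng.normal(loc=[1.0, 0.5, 0.0], scale=1.0, size=(200, 3))   # 3 environmental predictors at presences
env_background = rng.normal(loc=0.0, scale=1.5, size=(n_bg, 3))            # random background pseudo-absences

X = sm.add_constant(np.vstack([env_presence, env_background]))
y = np.r_[np.ones(len(env_presence)), np.zeros(n_bg)]

glm = sm.GLM(y, X, family=sm.families.Binomial()).fit()
print("AIC:", glm.aic)
print("AUC:", roc_auc_score(y, glm.predict(X)))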

Relevance:

30.00%

Publisher:

Abstract:

Quantitative or algorithmic trading is the automation of investment decisions, obeying a fixed or dynamic set of rules to determine trading orders. It has grown to account for up to 70% of the trading volume in some of the biggest financial markets, such as the New York Stock Exchange (NYSE). However, there is not a significant amount of academic literature devoted to it, owing to the private nature of investment banks and hedge funds. This project aims to review the literature and discuss the available models in a subject where publications are scarce and infrequent. We review the basic mathematical concepts needed for modelling financial markets, such as stochastic processes, stochastic integration, and basic models for price and spread dynamics, which are necessary for building quantitative strategies. We also contrast these models with real market data sampled at one-minute frequency from the Dow Jones Industrial Average (DJIA). Quantitative strategies try to exploit two types of behaviour: trend following or mean reversion. The former is grouped into the so-called technical models and the latter into so-called pairs trading. Technical models have been discarded by financial theoreticians, but we show that they can be properly cast as well-defined scientific predictors if the signals they generate pass the test of being a Markov time; that is, we can tell whether the signal has occurred or not by examining the information up to the current time, or, more technically, if the event is F_t-measurable. The concept of pairs trading, or market-neutral strategy, is fairly simple by contrast, yet it can be cast in a variety of mathematical models, ranging from a method based on a simple Euclidean distance, to a co-integration framework, to formulations involving stochastic differential equations such as the well-known Ornstein-Uhlenbeck mean-reverting equation and its variations. A model for forecasting any economic or financial magnitude could be defined with scientific rigour and yet lack any economic value, making it useless from a practical point of view. This is why the project could not be complete without a backtest of the strategies mentioned. Conducting a useful and realistic backtest is by no means a trivial exercise, since the "laws" that govern financial markets are constantly evolving in time. For this reason we emphasise the calibration of the strategies' parameters to the prevailing market conditions. We find that the parameters of technical models are more volatile than their counterparts in market-neutral strategies, and that calibration must be done at high sampling frequency to constantly track the current market situation. As a whole, the goal of this project is to provide an overview of a quantitative approach to investment, reviewing basic strategies and illustrating them by means of a backtest with real financial market data. The sources of the data used in this project are Bloomberg for intraday time series and Yahoo! for daily prices. All numerical computations and graphics used and shown in this project were implemented from scratch in MATLAB as part of this thesis; no other mathematical or statistical software was used.
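
As an illustration of the pairs-trading branch described above, the following Python sketch simulates an Ornstein-Uhlenbeck spread, calibrates its parameters by ordinary least squares on the discretised dynamics, and generates mean-reversion signals from the z-score. The synthetic series and thresholds are assumptions; the thesis itself works with DJIA data in MATLAB.

# Minimal pairs-trading sketch: fit an Ornstein-Uhlenbeck process to a price spread by
# regressing next-period levels on current levels, then trade the z-score of the spread.
# Synthetic data stand in for the intraday series used in the thesis.
import numpy as np

rng = np.random.default_rng(3)
dt, n = 1.0 / (6.5 * 60), 5000                 # one-minute bars over 6.5-hour sessions
theta_true, mu_true, sigma_true = 5.0, 0.0, 0.3
spread = np.zeros(n)
for t in range(1, n):                          # simulate dS = theta*(mu - S)dt + sigma*dW
    spread[t] = spread[t-1] + theta_true * (mu_true - spread[t-1]) * dt \
                + sigma_true * np.sqrt(dt) * rng.normal()

# OU calibration via OLS on S_{t+1} = a + b*S_t + eps  =>  theta = -ln(b)/dt, mu = a/(1-b)
b, a = np.polyfit(spread[:-1], spread[1:], 1)
theta_hat, mu_hat = -np.log(b) / dt, a / (1.0 - b)
sigma_eq = np.std(spread)                      # empirical stationary spread volatility

z = (spread - mu_hat) / sigma_eq
position = np.where(z > 2, -1, np.where(z < -2, 1, 0))   # short rich, long cheap, flat otherwise
print(f"theta={theta_hat:.2f}, mu={mu_hat:.4f}, trades={np.abs(np.diff(position)).sum()}")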

Relevance:

30.00%

Publisher:

Abstract:

This paper investigates the role of learning by private agents and the central bank (two-sided learning) in a New Keynesian framework in which both sides of the economy have asymmetric and imperfect knowledge about the true data-generating process. We assume that all agents employ the data that they observe (which may differ across sets of agents) to form beliefs about unknown aspects of the true model of the economy, use their beliefs to decide on actions, and revise these beliefs through a statistical learning algorithm as new information becomes available. We study the short-run dynamics of our model and derive its policy recommendations, particularly with respect to central bank communication. We demonstrate that two-sided learning can generate substantial increases in volatility and persistence and can alter the behaviour of the variables in the model in a significant way. Our simulations do not converge to a symmetric rational expectations equilibrium, and we highlight one source that invalidates the convergence results of Marcet and Sargent (1989). Finally, we identify a novel aspect of central bank communication in models of learning: communication can be harmful if the central bank's model is substantially mis-specified.
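
The statistical learning algorithm is not specified in the abstract; the sketch below shows a generic constant-gain recursive least-squares update of the kind commonly used in this adaptive-learning literature, applied to simulated data. It is a stylised stand-in, not the paper's model.

# Stylised constant-gain recursive least-squares (RLS) belief update, a generic
# illustration of statistical learning by agents; not the paper's actual model.
import numpy as np

def rls_update(beta, R, x, y, gain=0.02):
    """One constant-gain RLS step: beliefs beta are revised after observing (x, y)."""
    R = R + gain * (np.outer(x, x) - R)
    beta = beta + gain * np.linalg.solve(R, x * (y - x @ beta))
    return beta, R

rng = np.random.default_rng(4)
beta, R = np.zeros(2), np.eye(2)
true_beta = np.array([0.5, 0.9])
for _ in range(2000):
    x = np.array([1.0, rng.normal()])          # regressors observed by the agent
    y = x @ true_beta + 0.1 * rng.normal()     # outcome generated by the "true" process
    beta, R = rls_update(beta, R, x, y)
print(beta)   # beliefs drift towards the true coefficients when the model is well specified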

Relevance:

30.00%

Publisher:

Abstract:

Time-lapse geophysical data acquired during transient hydrological experiments are being increasingly employed to estimate subsurface hydraulic properties at the field scale. In particular, crosshole ground-penetrating radar (GPR) data, collected while water infiltrates into the subsurface either by natural or artificial means, have been demonstrated in a number of studies to contain valuable information concerning the hydraulic properties of the unsaturated zone. Previous work in this domain has considered a variety of infiltration conditions and different amounts of time-lapse GPR data in the estimation procedure. However, the particular benefits and drawbacks of these different strategies as well as the impact of a variety of key and common assumptions remain unclear. Using a Bayesian Markov-chain-Monte-Carlo stochastic inversion methodology, we examine in this paper the information content of time-lapse zero-offset-profile (ZOP) GPR traveltime data, collected under three different infiltration conditions, for the estimation of van Genuchten-Mualem (VGM) parameters in a layered subsurface medium. Specifically, we systematically analyze synthetic and field GPR data acquired under natural loading and two rates of forced infiltration, and we consider the value of incorporating different amounts of time-lapse measurements into the estimation procedure. Our results confirm that, for all infiltration scenarios considered, the ZOP GPR traveltime data contain important information about subsurface hydraulic properties as a function of depth, with forced infiltration offering the greatest potential for VGM parameter refinement because of the higher stressing of the hydrological system. Considering greater amounts of time-lapse data in the inversion procedure is also found to help refine VGM parameter estimates. Quite importantly, however, inconsistencies observed in the field results point to the strong possibility that posterior uncertainties are being influenced by model structural errors, which in turn underlines the fundamental importance of a systematic analysis of such errors in future related studies.
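
As a toy illustration of the stochastic inversion machinery only (not of the hydrological or GPR forward modelling), the following sketch runs a random-walk Metropolis-Hastings sampler on two parameters against synthetic data generated by a deliberately crude linear forward operator; all values are assumptions.

# Toy Metropolis-Hastings sketch of a stochastic inversion. The forward model is a
# placeholder linear map from two sampled parameters to synthetic traveltimes; it is
# not a GPR or unsaturated-flow simulator.
import numpy as np

rng = np.random.default_rng(5)
A = rng.normal(size=(30, 2))                   # placeholder forward operator
theta_true = np.array([1.5, 0.05])             # e.g. two VGM-like parameters (illustrative values)
data = A @ theta_true + 0.01 * rng.normal(size=30)
sigma = 0.01

def log_post(theta):
    if np.any(theta <= 0):                     # simple positivity prior
        return -np.inf
    resid = data - A @ theta
    return -0.5 * np.sum(resid**2) / sigma**2

theta = np.array([1.0, 0.1])
chain, lp = [], log_post(theta)
for _ in range(20000):
    prop = theta + 0.01 * rng.normal(size=2)   # random-walk proposal
    lp_prop = log_post(prop)
    if np.log(rng.uniform()) < lp_prop - lp:
        theta, lp = prop, lp_prop
    chain.append(theta)
chain = np.array(chain[5000:])                 # discard burn-in
print(chain.mean(axis=0), chain.std(axis=0))   # posterior means and uncertainties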

Relevance:

30.00%

Publisher:

Abstract:

The presence of the wolf in Catalonia has been documented since 2000. Since then, at least 14 different wolves have entered and left Catalan territory, although none of them has settled permanently. This study analyses the Catalan environment using GIS, creating a habitat suitability model that takes into account the following variables: distance to the nearest road, available biomass in the area, altitude, and type and percentage of land cover. The model is based on information obtained by consulting experts on both the wolf and the Catalan territory, as well as on a literature review of wolf habitat suitability. The questionnaire addressed to the experts considers the values that each variable can take within the study area, establishes ranges for the values of each variable, and asks the experts how each range may affect habitat suitability for the wolf. The results show that a large part of northern Catalonia has suitable conditions for the wolf to eventually breed there. An analysis of potential human-wolf conflict points is also carried out, together with an overlay of protected areas and the zones most suitable for wolf establishment.
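
A minimal weighted-overlay sketch of the kind of habitat-suitability model described above: each cell receives an expert-style score per variable and the scores are combined. The class breaks, scores and equal weights below are placeholders, not the values elicited in the study, and only three of the variables are shown.

# Illustrative weighted-overlay habitat-suitability sketch: each raster cell is scored
# per variable (road distance, altitude, cover) and the scores are averaged.
# All breaks, scores and weights are assumptions for illustration.
import numpy as np

rng = np.random.default_rng(6)
road_dist = rng.uniform(0, 10000, (100, 100))          # metres to nearest road
altitude = rng.uniform(200, 2800, (100, 100))          # metres a.s.l.
forest_cover = rng.uniform(0, 100, (100, 100))         # percent canopy cover

def score(raster, breaks, scores):
    """Map raster values to suitability scores (0-1) by class."""
    return np.select([raster < b for b in breaks], scores[:-1], default=scores[-1])

s_road = score(road_dist, [500, 2000], np.array([0.1, 0.5, 1.0]))
s_alt = score(altitude, [800, 2000], np.array([0.4, 1.0, 0.6]))
s_cover = score(forest_cover, [30, 60], np.array([0.3, 0.7, 1.0]))

suitability = (s_road + s_alt + s_cover) / 3.0          # equal weights, for illustration only
print("share of highly suitable cells:", np.mean(suitability > 0.8))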

Relevance:

30.00%

Publisher:

Abstract:

Classical treatments of problems of sequential mate choice assume that the distribution of the quality of potential mates is known a priori. This assumption, made for analytical purposes, may seem unrealistic, being at odds with empirical data as well as with evolutionary arguments. Using stochastic dynamic programming, we develop a model that allows searching individuals to learn about the distribution and, in particular, to update its mean and variance during the search. In a constant environment, a priori knowledge of the parameter values brings strong benefits in both the time needed to make a decision and the average quality of the mate obtained. Knowing the variance yields more benefits than knowing the mean, and benefits increase with variance. However, the costs of learning become progressively lower as more time is available for choice. When parameter values differ between demes and/or searching periods, a strategy relying on fixed a priori information may lead to erroneous decisions, which confers advantages on the learning strategy. However, the time available for choice plays an important role as well: if a decision must be made rapidly, a fixed strategy may do better even when the fixed image does not coincide with the local parameter values. These results help delineate the ecological and behavioural contexts in which learning strategies may spread.
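
For contrast with the learning model, the sketch below computes by backward induction the reservation values of the classical baseline in which the quality distribution is known a priori; the distribution and horizon are assumed values, and the paper's Bayesian updating of mean and variance is not reproduced.

# Backward-induction sketch of the classical sequential-choice baseline: mate qualities
# are i.i.d. from a *known* distribution and a candidate is accepted when its quality
# exceeds the value of continued search. The learning extension is not shown.
import numpy as np

rng = np.random.default_rng(7)
T = 20                                        # encounters remaining
qualities = rng.normal(10.0, 2.0, 100000)     # Monte Carlo draws from the known quality distribution

value = np.zeros(T + 1)                       # value[t]: expected value with t encounters left
for t in range(1, T + 1):
    # accept the current candidate if its quality exceeds value[t-1], else keep searching
    value[t] = np.mean(np.maximum(qualities, value[t - 1]))

print(value[1:6])                             # continuation values rise as more time remains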

Relevance:

30.00%

Publisher:

Abstract:

In medical imaging, merging automated segmentations obtained from multiple atlases has become a standard practice for improving accuracy. In this letter, we propose two new fusion methods: "Global Weighted Shape-Based Averaging" (GWSBA) and "Local Weighted Shape-Based Averaging" (LWSBA). These methods extend the well-known Shape-Based Averaging (SBA) by additionally incorporating similarity information between the reference (i.e., atlas) images and the target image to be segmented. We also propose a new spatially varying similarity-weighted neighbourhood prior model and an edge-preserving smoothness term that can be used with many existing fusion methods. We first present our new Markov Random Field (MRF) based fusion framework that models the above information. The proposed methods are evaluated in the context of segmentation of lymph nodes in head and neck 3D CT images, where they produce more accurate segmentations than the existing SBA.
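
A minimal sketch of (globally) weighted shape-based averaging, assuming toy atlas segmentations and arbitrary similarity weights: each atlas label map is converted to a signed distance map, the maps are averaged with the weights, and the fused segmentation is the negative region. The MRF prior and edge-preserving smoothness term proposed in the letter are not included.

# Minimal weighted shape-based averaging sketch: atlas segmentations are fused through
# similarity-weighted signed distance maps. Toy circular atlases and arbitrary weights
# stand in for registered atlas labels and image-similarity scores.
import numpy as np
from scipy.ndimage import distance_transform_edt

def signed_distance(mask):
    """Signed distance map: negative inside the structure, positive outside."""
    return distance_transform_edt(~mask) - distance_transform_edt(mask)

rng = np.random.default_rng(8)
atlases, weights = [], []
yy, xx = np.mgrid[0:64, 0:64]
for k in range(4):
    cy, cx, r = 32 + rng.integers(-3, 4), 32 + rng.integers(-3, 4), 12 + rng.integers(-2, 3)
    atlases.append((yy - cy)**2 + (xx - cx)**2 < r**2)   # toy atlas segmentations
    weights.append(rng.uniform(0.5, 1.0))                # assumed atlas-to-target similarity weights

weights = np.array(weights) / np.sum(weights)
fused_distance = sum(w * signed_distance(m) for w, m in zip(weights, atlases))
fused_label = fused_distance < 0                          # fused segmentation (zero level set interior)
print("fused area (pixels):", fused_label.sum())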

Relevance:

30.00%

Publisher:

Abstract:

SUMMARY This paper analyses the outcomes of the EEA and bilateral-agreements votes at the level of the 3025 communes of the Swiss Confederation by simultaneously modelling the vote and the participation decisions. The regressions include economic and political factors. The economic variables are the aggregated shares of people employed in the losing, winning and neutral sectors, according to the classification of BRUNETTI, JAGGI and WEDER (1998), which follows a Ricardo-Viner logic, and the average education levels, which follow a Heckscher-Ohlin approach. The political factors are those used in the recent literature. The results are extremely precise and consistent. Most of the variables have the predicted sign and are significant at the 1% level. More than 80% of the variance of the communes' vote is explained by the model, substantially reducing the residuals compared with former studies. The political variables also have the expected signs and are significant. Our results underline the importance of the interaction between electoral choice and participation decisions, as well as the importance of dealing with those issues simultaneously. Finally, they reveal the electorate's high level of information and rationality.