849 results for Building Information Modeling (BIM)


Relevance:

30.00%

Publisher:

Abstract:

The development of artificial neural network (ANN) models to predict the rheological behavior of grouts is described in this paper, and the sensitivity of the rheological parameters to variations in mixture ingredients is also evaluated. The input parameters of the neural network were the mixture ingredients influencing the rheological behavior of grouts, namely the cement content, fly ash, ground-granulated blast-furnace slag, limestone powder, silica fume, water-binder ratio (w/b), high-range water-reducing admixture, and viscosity-modifying agent (welan gum). The six outputs of the ANN models were the mini-slump, the apparent viscosity at low shear, and the yield stress and plastic viscosity values of the Bingham and modified Bingham models, respectively. The model is based on a multi-layer feed-forward neural network. The details of the proposed ANN, with its architecture, training, and validation, are presented in this paper. A database of 186 mixtures from eight different studies was developed to train and test the ANN model. The effectiveness of the trained ANN model is evaluated by comparing its responses with the experimental data used in the training process. The results show that the ANN model can accurately predict the mini-slump, the apparent viscosity at low shear, and the yield stress and plastic viscosity values of the Bingham and modified Bingham models for the pseudo-plastic grouts used in the training process. The model can also predict these properties for new mixtures within the practical range of the input variables used in training, with absolute errors of 2%, 0.5%, 8%, 4%, 2%, and 1.6%, respectively. The sensitivity analysis of the ANN model showed that the trends obtained by the models were in good agreement with the actual experimental results, demonstrating the effect of mixture ingredients on fluidity and on the rheological parameters of both the Bingham and modified Bingham models.
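
To make the mapping concrete, the sketch below sets up a multi-layer feed-forward network with the eight mixture ingredients as inputs and the six rheological responses as outputs, in the spirit of the abstract. The layer sizes, activation, scaling and the placeholder data are illustrative assumptions, not the authors' published configuration or database.

import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import StandardScaler

# Columns: cement, fly ash, GGBS, limestone powder, silica fume,
# w/b ratio, HRWRA dosage, welan gum dosage  (placeholder data, not
# the 186-mixture database used in the paper)
X = np.random.default_rng(0).uniform(size=(186, 8))
# Columns: mini-slump, apparent viscosity at low shear,
# Bingham yield stress, Bingham plastic viscosity,
# modified-Bingham yield stress, modified-Bingham plastic viscosity
Y = np.random.default_rng(1).uniform(size=(186, 6))

x_scaler, y_scaler = StandardScaler().fit(X), StandardScaler().fit(Y)
model = MLPRegressor(hidden_layer_sizes=(12, 12), activation="tanh",
                     max_iter=5000, random_state=0)
model.fit(x_scaler.transform(X), y_scaler.transform(Y))

# Predict the rheology of a new mixture within the training range
new_mix = X[:1]
pred = y_scaler.inverse_transform(model.predict(x_scaler.transform(new_mix)))
print(pred)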

Relevance:

30.00%

Publisher:

Abstract:

This paper describes a substantial effort to build a real-time interactive multimodal dialogue system with a focus on emotional and non-verbal interaction capabilities. The work is motivated by the aim to provide technology with competences in perceiving and producing the emotional and non-verbal behaviours required to sustain a conversational dialogue. We present the Sensitive Artificial Listener (SAL) scenario as a setting which seems particularly suited for the study of emotional and non-verbal behaviour, since it requires only very limited verbal understanding on the part of the machine. This scenario allows us to concentrate on non-verbal capabilities without having to address at the same time the challenges of spoken language understanding, task modeling, etc. We first report on three prototype versions of the SAL scenario, in which the behaviour of the Sensitive Artificial Listener characters was determined by a human operator. These prototypes served the purpose of verifying the effectiveness of the SAL scenario and allowed us to collect data required for building system components for analysing and synthesising the respective behaviours. We then describe the fully autonomous integrated real-time system we created, which combines incremental analysis of user behaviour, dialogue management, and synthesis of speaker and listener behaviour of a SAL character displayed as a virtual agent. We discuss principles that should underlie the evaluation of SAL-type systems. Since the system is designed for modularity and reuse, and since it is publicly available, the SAL system has potential as a joint research tool in the affective computing research community.
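
As a rough illustration of the kind of modular, incremental pipeline such a system integrates, the sketch below wires an analysis stage, a dialogue manager and a behaviour synthesiser together with queues. All component names, the arousal feature and the action labels are invented for illustration and are not the actual SAL implementation.

import queue, threading, time

analysis_q, behaviour_q = queue.Queue(), queue.Queue()

def analyser():
    """Stand-in for incremental audio-visual analysis of the user."""
    for arousal in (0.1, 0.6, 0.9):
        analysis_q.put({"arousal": arousal})
        time.sleep(0.05)
    analysis_q.put(None)

def dialogue_manager():
    """Chooses a listener/speaker action from the latest user state."""
    while (state := analysis_q.get()) is not None:
        action = "backchannel-nod" if state["arousal"] < 0.5 else "excited-utterance"
        behaviour_q.put(action)
    behaviour_q.put(None)

def synthesiser():
    """Stand-in for speech and virtual-agent behaviour synthesis."""
    while (action := behaviour_q.get()) is not None:
        print("agent performs:", action)

threads = [threading.Thread(target=f) for f in (analyser, dialogue_manager, synthesiser)]
for t in threads: t.start()
for t in threads: t.join()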

Relevance:

30.00%

Publisher:

Abstract:

In small islands, a freshwater lens can develop due to the recharge induced by rain. Magnitude and spatial distribution of this recharge control the elevation of freshwater and the depth of its interface with salt water. Therefore, the study of lens morphology gives useful information on both the recharge and water uptake due to evapotranspiration by vegetation. Electrical resistivity tomography was applied on a small coral reef island, giving relevant information on the lens structure. Variable density groundwater flow models were then applied to simulate freshwater behavior. Cross validation of the geoelectrical model and the groundwater model showed that recharge exceeds water uptake in dunes with little vegetation, allowing the lens to develop. Conversely, in the low-lying and densely vegetated sectors, where water uptake exceeds recharge, the lens cannot develop and seawater intrusion occurs. This combined modeling method constitutes an original approach to evaluate effective groundwater recharge in such environments.
[Comte, J.-C., O. Banton, J.-L. Join, and G. Cabioch (2010), Evaluation of effective groundwater recharge of freshwater lens in small islands by the combined modeling of geoelectrical data and water heads, Water Resour. Res., 46, W06601, doi:10.1029/2009WR008058.]
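
As a back-of-envelope companion to the abstract (and not the variable-density flow model actually used in the study), the sketch below applies the classical Dupuit-Ghyben-Herzberg solution for a strip island to show how net recharge, i.e. rain recharge minus vegetation uptake, controls whether a lens can develop and how thick it becomes. The island half-width, recharge, uptake and hydraulic conductivity values are illustrative.

import numpy as np

def lens_interface_depth(x, half_width, net_recharge, K,
                         rho_f=1000.0, rho_s=1025.0):
    """Depth of the fresh/salt interface below sea level (m) at distance
    x (m) from the island centre, for net recharge (m/d) and hydraulic
    conductivity K (m/d). Returns 0 where no lens can develop."""
    if net_recharge <= 0.0:          # uptake exceeds recharge: no lens
        return np.zeros_like(np.asarray(x, dtype=float))
    delta = (rho_s - rho_f) / rho_f  # ~0.025
    z2 = net_recharge * (half_width**2 - np.asarray(x, float)**2) \
         / (K * (1.0 + delta) * delta)
    return np.sqrt(np.clip(z2, 0.0, None))

# Dune sector with sparse vegetation: recharge > uptake, so a lens develops
print(lens_interface_depth(x=0.0, half_width=150.0, net_recharge=1.5e-3, K=10.0))
# Low-lying, densely vegetated sector: uptake >= recharge, so no lens (0 m)
print(lens_interface_depth(x=0.0, half_width=150.0, net_recharge=-0.5e-3, K=10.0))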

Relevance:

30.00%

Publisher:

Abstract:

Background: Many deep-sea benthic animals occur in patchy distributions separated by thousands of kilometres, yet because deep-sea habitats are remote, little is known about their larval dispersal. Our novel method simulates dispersal by combining data from the Argo array of autonomous oceanographic probes, deep-sea ecological surveys, and comparative invertebrate physiology. The predicted particle tracks allow quantitative, testable predictions about the dispersal of benthic invertebrate larvae in the south-west Pacific.

Principal Findings: In a test case presented here, using non-feeding, non-swimming (lecithotrophic trochophore) larvae of polyplacophoran molluscs (chitons), we show that the likely dispersal pathways in a single generation are significantly shorter than the distances between the three known population centres in our study region. The large-scale density of chiton populations throughout our study region is potentially much greater than present survey data suggest, with intermediate ‘stepping stone’ populations yet to be discovered.

Conclusions/Significance: We present a new method that is broadly applicable to studies of the dispersal of deep-sea organisms. This test case demonstrates the power and potential applications of our new method in generating quantitative, testable hypotheses at multiple levels to solve the mismatch between observed and expected distributions: probabilistic predictions of locations of intermediate populations, potential alternative dispersal mechanisms, and expected population genetic structure. The global Argo data have never previously been used to address benthic biology, and our method can be applied to any non-swimming deep-sea larvae, giving information on dispersal corridors and population densities in habitats that remain intrinsically difficult to assess.
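
The sketch below illustrates the underlying idea of advecting non-swimming larvae as passive particles in a deep current field over a fixed larval duration. The velocity field, the planktonic duration, the diffusion term and the release coordinates are placeholders; in the actual method the currents are derived from Argo float displacements and the durations from comparative physiology.

import numpy as np

def current_velocity(lon, lat):
    """Placeholder deep-current field (degrees/day); in practice this
    would be interpolated from Argo park-depth displacements."""
    u = 0.02 * np.cos(np.radians(lat))      # zonal component
    v = 0.01 * np.sin(np.radians(lon))      # meridional component
    return u, v

def track_larva(lon0, lat0, larval_duration_days=30, dt_days=1.0,
                rng=np.random.default_rng(0)):
    """Advect one passive particle with weak random dispersion."""
    lon, lat = lon0, lat0
    track = [(lon, lat)]
    for _ in range(int(larval_duration_days / dt_days)):
        u, v = current_velocity(lon, lat)
        lon += (u + rng.normal(0, 0.005)) * dt_days
        lat += (v + rng.normal(0, 0.005)) * dt_days
        track.append((lon, lat))
    return np.array(track)

# Release from a hypothetical population centre (illustrative coordinates)
track = track_larva(lon0=179.0, lat0=-23.0)
print("net displacement (deg):", track[-1] - track[0])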

Relevance:

30.00%

Publisher:

Abstract:

The use of pulsed radar for investigating the integrity of structural elements is gaining popularity and becoming firmly established as a nondestructive test method in civil engineering. Difficulties can often arise in the interpretation of results obtained, particularly where internal details are relatively complex. One approach that can be used to understand and evaluate radar results is through numerical modeling of signal propagation and reflection. By comparing the results of numerical modeling with those from field measurements, engineers can gain valuable insight into the probable features embedded beneath the surface of a structural element. This paper discusses a series of numerical techniques for modeling subsurface radar and compares the precision of the results with those taken from real field data. It is found that more complex problems require more sophisticated analysis techniques to obtain realistic results, with a consequent increase in the computational resources required to carry out the modeling.
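
One common numerical technique for this kind of forward modeling is the finite-difference time-domain (FDTD) method; the sketch below propagates a radar pulse through a one-dimensional layered medium with an embedded reflector. The geometry, permittivities and source parameters are illustrative, and the fixed (reflecting) domain boundaries are a simplification that a real model would replace with absorbing boundaries.

import numpy as np

c0 = 3e8                          # free-space speed of light (m/s)
dz = 5e-3                         # spatial step (m)
dt = dz / (2 * c0)                # time step within the Courant limit
nz, nt = 400, 2500
eps0, mu0 = 8.854e-12, 4e-7 * np.pi

eps_r = np.ones(nz)               # relative permittivity profile
eps_r[100:] = 6.0                 # concrete below the surface
eps_r[250:260] = 81.0             # water-filled void as a strong reflector

Ez, Hy = np.zeros(nz), np.zeros(nz - 1)
received = np.zeros(nt)

for n in range(nt):
    Hy += dt / (mu0 * dz) * (Ez[1:] - Ez[:-1])
    Ez[1:-1] += dt / (eps0 * eps_r[1:-1] * dz) * (Hy[1:] - Hy[:-1])
    Ez[50] += np.exp(-0.5 * ((n - 80) / 20.0) ** 2)   # Gaussian pulse source
    received[n] = Ez[55]          # trace recorded near the surface

# Crude measure of the reflected energy arriving after the direct pulse
print("peak reflected amplitude:", np.abs(received[500:]).max())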

Relevance:

30.00%

Publisher:

Abstract:

This article analyses longitudinal case-based research exploring the attitudes and strategic responses of micro-enterprise owners in adopting information and communication technology (ICT). In so doing, it contributes to the limited literature on micro-enterprise ICT adoption, with a particular focus on sole proprietors. It provides a basis for widening the theoretical base of the literature pertaining to ICT adoption on two levels. First, a framework is developed which integrates the findings to illustrate the relationships between attitudes towards ICT adoption, endogenous and exogenous influencers of these attitudes and subsequent strategic response in ICT adoption. Second, building upon this framework the article reveals the unique challenges, opportunities and implications of ICT adoption for sole-proprietor micro-enterprises. © The Author(s) 2012

Relevance:

30.00%

Publisher:

Abstract:

This article shows how both employers and the state have influenced macro-level processes and structures concerning the content and transposition of the European Union (EU) Employee Information and Consultation (I&C) Directive. It argues that the processes of regulation occupied by employers reinforce a voluntarism which marginalizes rather than shares decision-making power with workers. The contribution advances the conceptual lens of ‘regulatory space’ by building on Lukes’ multiple faces of power to better understand how employment regulation is determined across transnational, national and enterprise levels. The research proposes an integrated analytical framework on which ‘occupancy’ of regulatory space can be evaluated in comparative national contexts.

Relevance:

30.00%

Publisher:

Abstract:

In intelligent video surveillance systems, scalability (of the number of simultaneous video streams) is important. Two key factors which hinder scalability are the time spent in decompressing the input video streams, and the limited computational power of the processor. This paper demonstrates how a combination of algorithmic and hardware techniques can overcome these limitations, and significantly increase the number of simultaneous streams. The techniques used are processing in the compressed domain, and exploitation of the multicore and vector processing capability of modern processors. The paper presents a system which performs background modeling, using a Mixture of Gaussians approach. This is an important first step in the segmentation of moving targets. The paper explores the effects of reducing the number of coefficients in the compressed domain, in terms of throughput speed and quality of the background modeling. The speedups achieved by exploiting compressed domain processing, multicore and vector processing are explored individually. Experiments show that a combination of all these techniques can give a speedup of 170 times on a single CPU compared to a purely serial, spatial domain implementation, with a slight gain in quality.
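
The sketch below shows the core per-value Mixture-of-Gaussians update in the style of Stauffer and Grimson, which is the kind of background model the paper builds on. It operates on plain pixel values with illustrative parameters; the paper's contribution of applying the model to compressed-domain (DCT) coefficients and accelerating it with multicore and vector instructions is not reproduced here.

import numpy as np

K, ALPHA, VAR0, T = 3, 0.01, 15.0**2, 2.5    # illustrative parameters

def init_model(frame):
    shape = frame.shape + (K,)
    mu = np.zeros(shape); mu[..., 0] = frame
    var = np.full(shape, VAR0)
    w = np.zeros(shape); w[..., 0] = 1.0
    return mu, var, w

def update(frame, mu, var, w):
    """Update the mixture and return a foreground mask."""
    d = frame[..., None] - mu
    match = (d**2 < (T**2) * var)                    # within T sigmas
    # keep only the best-matching component per pixel
    best = np.argmax(np.where(match, w / np.sqrt(var), -np.inf), axis=-1)
    matched = np.take_along_axis(match, best[..., None], -1)[..., 0]
    idx = (np.arange(K) == best[..., None])
    mu = np.where(idx & matched[..., None], mu + ALPHA * d, mu)
    var = np.where(idx & matched[..., None], var + ALPHA * (d**2 - var), var)
    w = (1 - ALPHA) * w + ALPHA * (idx & matched[..., None])
    # unmatched pixels: replace the weakest component with a new Gaussian
    weakest = np.argmin(w, axis=-1)
    repl = (np.arange(K) == weakest[..., None]) & ~matched[..., None]
    mu = np.where(repl, frame[..., None], mu)
    var = np.where(repl, VAR0, var)
    w /= w.sum(axis=-1, keepdims=True)
    return mu, var, w, ~matched                      # foreground = no match

frames = np.random.default_rng(0).normal(120, 5, size=(50, 64, 64))
mu, var, w = init_model(frames[0])
for f in frames[1:]:
    mu, var, w, fg = update(f, mu, var, w)
print("foreground pixels in last frame:", int(fg.sum()))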

Relevance:

30.00%

Publisher:

Abstract:

Prior research has argued that use of optional properties in conceptual models results in loss of information about the semantics of the domains represented by the models. Empirical research undertaken to date supports this argument. Nevertheless, no systematic analysis has been done of whether use of optional properties is always problematic. Furthermore, prior empirical research might have deliberately or unwittingly employed models where use of optionality always causes problems. Accordingly, we examine analytically whether use of optional properties is always problematic. We employ our analytical results to inform the design of an experiment where we systematically examined the impact of optionality on users’ ability to understand domains represented by different types of conceptual models. We found evidence that use of optionality undermines users’ ability to understand the domain represented by a model but that this effect weakens when use of mandatory properties to replace optional properties leads to more-complex models.
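
A tiny illustration of the trade-off the experiment probes is given below: a domain modelled with an optional property versus the same domain modelled with only mandatory properties via subtyping, which is more explicit but more complex. The class and attribute names are invented for illustration.

from dataclasses import dataclass
from typing import Optional

# Optional-property version: the model is compact, but it does not say
# which employees have a company car, or why some do not.
@dataclass
class Employee:
    name: str
    company_car: Optional[str] = None

# Mandatory-property version: optionality is replaced by subtypes, so the
# semantics are explicit, at the price of a more complex model.
@dataclass
class Employee2:
    name: str

@dataclass
class EmployeeWithCar(Employee2):
    company_car: str               # mandatory for this subtype

@dataclass
class EmployeeWithoutCar(Employee2):
    pass

print(Employee("Ana"), EmployeeWithCar("Ben", "XYZ-123"))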

Relevance:

30.00%

Publisher:

Abstract:

A core activity in information systems development involves understanding the conceptual model of the domain that the information system supports. Any conceptual model is ultimately created using a conceptual-modeling (CM) grammar. Accordingly, just as high-quality conceptual models facilitate high-quality systems development, high-quality CM grammars facilitate high-quality conceptual modeling. This paper seeks to provide a new perspective on improving the quality of CM grammar semantics. For the past twenty years, the leading approach to this topic has drawn on ontological theory. However, the ontological approach captures just half of the story. It needs to be coupled with a logical approach. We show how ontological quality and logical quality interrelate, and we outline three contributions of a logical approach: the ability to see familiar conceptual-modeling problems in simpler ways, the illumination of new problems, and the ability to prove the benefit of modifying CM grammars.

Relevance:

30.00%

Publisher:

Abstract:

In highly heterogeneous aquifer systems, the conceptualization of regional groundwater flow models frequently results in the generalization or neglect of aquifer heterogeneities, both of which may result in erroneous model outputs. The calculation of equivalence for hydrogeological parameters, applied to upscaling, provides a means of accounting for measurement-scale information in regional-scale models. In this study, the Permo-Triassic Lagan Valley strategic aquifer in Northern Ireland is observed to be heterogeneous, if not discontinuous, due to subvertically trending low-permeability Tertiary dolerite dykes. Interpretation of ground and aerial magnetic surveys produces a deterministic solution to dyke locations. By measuring the relative permeabilities of both the dykes and the sedimentary host rock, equivalent directional permeabilities that determine anisotropy, calculated as a function of dyke density, are obtained. This provides parameters for larger-scale equivalent blocks, which can be directly imported into numerical groundwater flow models. Different conceptual models with different degrees of upscaling are numerically tested and the results compared to regional flow observations. Simulation results show that the upscaled permeabilities from geophysical data allow one to properly account for the observed spatial variations of groundwater flow, without requiring an artificial distribution of aquifer properties. It is also found that an intermediate degree of upscaling, between accounting for mapped field-scale dykes and accounting for one regional anisotropy value (maximum upscaling), provides results closest to the observations at the regional scale.
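
The classical layered-medium equivalence behind such upscaling can be sketched as follows: the arithmetic mean of permeabilities applies parallel to the dykes and the harmonic mean perpendicular to them, so anisotropy grows with dyke density. The permeability values and dyke fractions below are illustrative, not the measured Lagan Valley values.

def equivalent_permeability(k_host, k_dyke, dyke_fraction):
    """Return (k_parallel, k_perpendicular) for a block containing a
    volume fraction `dyke_fraction` of dyke material."""
    f = dyke_fraction
    k_par = (1 - f) * k_host + f * k_dyke            # arithmetic mean
    k_perp = 1.0 / ((1 - f) / k_host + f / k_dyke)   # harmonic mean
    return k_par, k_perp

# Sandstone host ~1e-13 m^2, dolerite dyke ~1e-17 m^2 (illustrative values)
for f in (0.01, 0.05, 0.10):
    k_par, k_perp = equivalent_permeability(1e-13, 1e-17, f)
    print(f"dyke fraction {f:.2f}: k_par={k_par:.2e} m^2, "
          f"k_perp={k_perp:.2e} m^2, anisotropy={k_par / k_perp:.0f}")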

Relevance:

30.00%

Publisher:

Abstract:

Dendritic molecules have well defined, three-dimensional branched architectures, and constitute a unique nanoscale toolkit. This review focuses on examples in which individual dendritic molecules are assembled into more complex arrays via non-covalent interactions. In particular, it illustrates how the structural information programmed into the dendritic architecture controls the assembly process, and as a consequence, the properties of the supramolecular structures which are generated. Furthermore, the review emphasises how the use of non-covalent (supramolecular) interactions, provides the assembly process with reversibility, and hence a high degree of control. The review also illustrates how self-assembly offers an ideal approach for amplifying the branching of small, synthetically accessible, relatively inexpensive dendritic systems (e.g. dendrons), into highly branched complex nanoscale assemblies.

The review begins by considering the assembly of dendritic molecules to generate discrete, well-defined supramolecular assemblies. The variety of possible assembled structures is illustrated, and the ability of an assembled structure to encapsulate a templating unit is described. The ability of both organic and inorganic building blocks to direct the assembly process is discussed. The review then describes larger discrete assemblies of dendritic molecules, which do not exist as a single well-defined species, but instead exist as statistical distributions. For example, assembly around nanoparticles, the assembly of amphiphilic dendrons and the assembly of dendritic systems in the presence of DNA will all be discussed. Finally, the review examines dendritic molecules, which assemble or order themselves into extended arrays. Such systems extend beyond the nanoscale into the microscale or even the macroscale domain, exhibiting a wide range of different architectures. The ability of these assemblies to act as gel-phase or liquid crystalline materials will be considered.

Taken as a whole, this review emphasises the control and tunability that underpins the assembly of nanomaterials using dendritic building blocks, and furthermore highlights the potential future applications of these assemblies at the interfaces between chemistry, biology and materials science. 

Relevance:

30.00%

Publisher:

Abstract:

Low-power processors and accelerators that were originally designed for the embedded systems market are emerging as building blocks for servers. Power capping has been actively explored as a technique to reduce the energy footprint of high-performance processors. The opportunities and limitations of power capping on the new low-power processor and accelerator ecosystem are less understood. This paper presents an efficient power capping and management infrastructure for heterogeneous SoCs based on hybrid ARM/FPGA designs. The infrastructure coordinates dynamic voltage and frequency scaling with task allocation on a customised Linux system for the Xilinx Zynq SoC. We present a compiler-assisted power model to guide voltage and frequency scaling, in conjunction with workload allocation between the ARM cores and the FPGA, under given power caps. The model achieves less than 5% estimation bias to mean power consumption. In an FFT case study, the proposed power capping schemes achieve on average 97.5% of the performance of the optimal execution and match the optimal execution in 87.5% of the cases, while always meeting power constraints.
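
A much-simplified sketch of cap-constrained configuration selection is shown below: it enumerates DVFS operating points and ARM/FPGA workload splits, estimates power with a simple linear model, and keeps the fastest configuration that respects the cap. The operating points, power-model coefficients and throughput figures are assumptions for illustration and do not reproduce the paper's compiler-assisted model or the Zynq-specific settings.

from itertools import product

# (frequency GHz, voltage V) operating points for the ARM cores (assumed)
DVFS_POINTS = [(0.3, 0.9), (0.6, 1.0), (0.8, 1.1), (1.0, 1.2)]
FPGA_THROUGHPUT = 2.0e9      # FFT points/s when the FPGA fabric is used
FPGA_POWER = 1.5             # W, static + dynamic cost of the fabric
P_STATIC = 0.5               # W, always-on baseline

def arm_power(f_ghz, v):
    return P_STATIC + 0.8 * f_ghz * v**2        # ~ C f V^2 dynamic term

def arm_throughput(f_ghz):
    return 0.9e9 * f_ghz                        # points/s, roughly linear in f

def best_config(power_cap_w, work=1e9):
    best = None
    for (f, v), fpga_share in product(DVFS_POINTS, (0.0, 0.25, 0.5, 0.75, 1.0)):
        power = arm_power(f, v) + (FPGA_POWER if fpga_share > 0 else 0.0)
        if power > power_cap_w:
            continue
        t_arm = (1 - fpga_share) * work / arm_throughput(f)
        t_fpga = fpga_share * work / FPGA_THROUGHPUT
        runtime = max(t_arm, t_fpga)            # ARM and FPGA run in parallel
        if best is None or runtime < best[0]:
            best = (runtime, f, v, fpga_share, power)
    return best

for cap in (1.5, 2.5, 3.5):
    print(cap, "W cap ->", best_config(cap))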

Relevance:

30.00%

Publisher:

Abstract:

Age-depth modeling using Bayesian statistics requires well-informed prior information about the behavior of sediment accumulation. Here we present average sediment accumulation rates (represented as deposition times, DT, in yr/cm) for lakes in an Arctic setting, and we examine the variability across space (intra- and inter-lake) and time (late Holocene). The dataset includes over 100 radiocarbon dates, primarily on bulk sediment, from 22 sediment cores obtained from 18 lakes spanning the boreal to tundra ecotone gradients in subarctic Canada. There are four to twenty-five radiocarbon dates per core, depending on the length and character of the sediment records. Deposition times were calculated at 100-year intervals from age-depth models constructed using the ‘classical’ age-depth modeling software Clam. Lakes in boreal settings have the most rapid accumulation (mean DT 20 ± 10 yr/cm), whereas lakes in tundra settings accumulate at moderate (mean DT 70 ± 10 yr/cm) to very slow rates (>100 yr/cm). Many of the age-depth models demonstrate fluctuations in accumulation that coincide with lake evolution and post-glacial climate change. Ten of our sediment cores yielded sediments as old as c. 9,000 cal BP (BP = years before AD 1950). Between c. 9,000 and c. 6,000 cal BP, sediment accumulation was relatively rapid (DT of 20 to 60 yr/cm). Accumulation slowed between c. 5,500 and c. 4,000 cal BP as vegetation expanded northward in response to warming. A short period of rapid accumulation occurred near 1,200 cal BP at three lakes. Our research will help inform priors in Bayesian age modeling.
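
The deposition-time calculation itself is straightforward and can be sketched as below: given an age-depth model, depth is interpolated onto a 100-year grid and DT over each interval is the number of years divided by the sediment accumulated. The age-depth pairs used here are made up; in the study they come from Clam models fitted to the radiocarbon dates.

import numpy as np

# depth (cm) and modeled calibrated age (cal yr BP), both monotonic (made-up values)
depth = np.array([0, 30, 60, 90, 120, 150, 180], dtype=float)
age = np.array([-60, 800, 1900, 3300, 5200, 7400, 9000], dtype=float)

ages_100 = np.arange(0, age[-1], 100.0)          # 100-year grid
depth_at = np.interp(ages_100, age, depth)       # depth at each grid age

# DT over each 100-year interval = years elapsed / sediment accumulated
dt_yr_per_cm = 100.0 / np.diff(depth_at)

for a, dt in list(zip(ages_100[1:], dt_yr_per_cm))[:5]:
    print(f"{a:6.0f} cal BP: DT = {dt:5.1f} yr/cm")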

Relevance:

30.00%

Publisher:

Abstract:

In this paper we extend the minimum-cost network flow approach to multi-target tracking by incorporating a motion model, allowing the tracker to better cope with long-term occlusions and missed detections. In our new method, the tracking problem is solved iteratively: first, an initial tracking solution is found without the help of motion information. Given this initial set of tracklets, the motion at each detection is estimated and used to refine the tracking solution. Finally, special edges are added to the tracking graph, allowing a further revised tracking solution to be found, where distant tracklets may be linked based on motion similarity. Our system has been tested on the PETS S2.L1 and Oxford Town Centre sequences, outperforming the baseline system and achieving results comparable with the current state of the art.
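
For readers unfamiliar with the baseline formulation, the sketch below builds the standard min-cost network-flow tracking graph: each detection becomes a pair of nodes joined by an observation edge, with entry, exit and frame-to-frame transition edges, and pushing a fixed number of flow units from source to sink yields that many tracks. The toy detections, costs and fixed track count are illustrative, and the paper's motion-model edges and iterative refinement are not shown.

import networkx as nx

# (frame, x-position) toy detections: two objects moving to the right
detections = {0: (0, 10), 1: (0, 50), 2: (1, 14), 3: (1, 55),
              4: (2, 18), 5: (2, 60)}

G = nx.DiGraph()
F = 2                                    # number of tracks to extract
G.add_node("S", demand=-F)
G.add_node("T", demand=F)

for i, (t, x) in detections.items():
    G.add_edge(f"u{i}", f"v{i}", capacity=1, weight=-100)   # reward for using a detection
    G.add_edge("S", f"u{i}", capacity=1, weight=50)          # entry cost
    G.add_edge(f"v{i}", "T", capacity=1, weight=50)          # exit cost
    for j, (t2, x2) in detections.items():
        if t2 == t + 1:                                       # link to next frame
            G.add_edge(f"v{i}", f"u{j}", capacity=1,
                       weight=abs(x2 - x))                    # distance-based transition cost

flow = nx.min_cost_flow(G)
links = [(i, j) for i in detections for j in detections
         if flow.get(f"v{i}", {}).get(f"u{j}", 0) > 0]
print("tracklet links:", links)    # expected: (0,2), (2,4) and (1,3), (3,5)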