915 results for building information model
Abstract:
An important consideration in the development of mathematical models for dynamic simulation is the identification of the appropriate mathematical structure. By building models with an efficient structure which is devoid of redundancy, it is possible to create simple, accurate and functional models. This leads not only to efficient simulation, but to a deeper understanding of the important dynamic relationships within the process. In this paper, a method is proposed for systematic model development for startup and shutdown simulation which is based on the identification of the essential process structure. The key tool in this analysis is the method of nonlinear perturbations for structural identification and model reduction. Starting from a detailed mathematical process description, both singular and regular structural perturbations are detected. These techniques are then used to give insight into the system structure and, where appropriate, to eliminate superfluous model equations or reduce them to other forms. This process retains the ability to interpret the reduced-order model in terms of the physico-chemical phenomena. Using this model reduction technique it is possible to attribute observable dynamics to particular unit operations within the process. This relationship then highlights the unit operations which must be accurately modelled in order to develop a robust plant model. The technique generates detailed insight into the dynamic structure of the models, providing a basis for system re-design and dynamic analysis. The technique is illustrated on the modelling of an evaporator startup. Copyright (C) 1996 Elsevier Science Ltd
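The abstract does not give the reduction algorithm itself, but the general idea behind a singular structural perturbation can be illustrated with a quasi-steady-state reduction of a hypothetical two-time-scale system. The system, time-scale ratio and variable names below are assumptions chosen for illustration only; they are not the authors' evaporator model.

```python
# Minimal sketch (not the authors' algorithm): quasi-steady-state reduction of a
# hypothetical two-time-scale system, showing how a singularly perturbed fast
# state can be replaced by an algebraic relation in the reduced model.
import numpy as np
from scipy.integrate import solve_ivp

EPS = 1e-3  # assumed small time-scale ratio for the fast state

def full_model(t, y):
    x, z = y
    dxdt = -2.0 * x + z          # slow dynamics
    dzdt = (x - z) / EPS         # fast dynamics (singularly perturbed)
    return [dxdt, dzdt]

def reduced_model(t, x):
    # Quasi-steady state: set eps*dz/dt = 0  =>  z = x, eliminating one ODE
    z = x
    return -2.0 * x + z

t_eval = np.linspace(0.0, 5.0, 200)
full = solve_ivp(full_model, (0, 5), [1.0, 0.0], t_eval=t_eval, method="LSODA")
red = solve_ivp(reduced_model, (0, 5), [1.0], t_eval=t_eval)
print("max |x_full - x_reduced| =", np.max(np.abs(full.y[0] - red.y[0])))
```

In the reduced model the fast differential equation becomes an algebraic relation, which is the kind of structural simplification the abstract attributes to singular perturbations while retaining a physical interpretation of the remaining states.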
Abstract:
Experimental data for E. coli debris size reduction during high-pressure homogenisation at 55 MPa are presented. A mathematical model based on grinding theory is developed to describe the data. The model is based on first-order breakage and compensation conditions. It does not require any assumption of a specified distribution for debris size and can be used given information on the initial size distribution of whole cells and the disruption efficiency during homogenisation. The number of homogeniser passes is incorporated into the model and used to describe the size reduction of non-induced stationary and induced E. coli cells during homogenisation. Regressing the results to the model equations gave an excellent fit to experimental data (>98.7% of variance explained for both fermentations), confirming the model's potential for predicting size reduction during high-pressure homogenisation. This study provides a means to optimise both homogenisation and disc-stack centrifugation conditions for recombinant product recovery. (C) 1997 Elsevier Science Ltd.
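For orientation, a minimal sketch of a discretised first-order breakage balance over homogeniser passes is given below. The number of size classes, the per-pass breakage fraction and the redistribution rule are assumptions made for the example; they are not taken from the paper or fitted to its data.

```python
# Illustrative sketch only: first-order breakage over homogeniser passes.
# Breakage constant and redistribution into smaller classes are assumed.
import numpy as np

n_classes = 10                    # size classes, index 0 = largest debris
k_break = 0.4                     # assumed first-order breakage fraction per pass

def one_pass(f):
    # Assumed rule: material broken out of class i is split evenly among all
    # smaller classes i+1 .. n-1 (the smallest class cannot break further).
    f = f.copy()
    broken = np.zeros_like(f)
    for i in range(n_classes - 1):
        loss = k_break * f[i]
        f[i] -= loss
        broken[i + 1:] += loss / (n_classes - 1 - i)
    return f + broken

f = np.zeros(n_classes)
f[0] = 1.0                        # initial distribution: all mass as whole cells

for n_pass in range(1, 6):
    f = one_pass(f)
    mean_class = np.sum(np.arange(n_classes) * f) / f.sum()
    print(f"pass {n_pass}: mean size class index = {mean_class:.2f}")
```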
Abstract:
The ground and excited state geometry of the six-coordinate copper(II) ion is examined in detail using the CuF6(4-) and Cu(H2O)6(2+) complexes as examples. A variety of spectroscopic techniques are used to illustrate the relations between the geometric and electronic properties of these complexes through the characterization of their potential energy surfaces.
Abstract:
This study evaluated the use of Raman spectroscopy to identify the spectral differences between normal (N), benign hyperplasia (BPH) and adenocarcinoma (CaP) tissue in fragments of prostate biopsies in vitro, with the aim of developing a spectral diagnostic model for tissue classification. A dispersive Raman spectrometer was used with 830 nm wavelength and 80 mW excitation. Following Raman data collection and tissue histopathology (48 fragments diagnosed as N, 43 as BPH and 14 as CaP), two diagnostic models were developed in order to extract diagnostic information: the first using PCA and Mahalanobis analysis techniques and the second a simplified biochemical model based on spectral features of cholesterol, collagen, smooth muscle cell and adipocyte. Spectral differences between N, BPH and CaP tissues were observed mainly in the Raman bands associated with proteins, lipids, nucleic acids and amino acids. The PCA diagnostic model showed a sensitivity and specificity of 100%, which indicates the ability of PCA and Mahalanobis distance techniques to classify tissue changes in vitro. It was also found that the relative amount of collagen decreased while the amounts of cholesterol and adipocytes increased with severity of the disease, and that the smooth muscle cell contribution increased in BPH tissue. These characteristics were used for diagnostic purposes.
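The abstract names PCA combined with Mahalanobis distance as the first diagnostic model. The sketch below illustrates that general combination on synthetic data; the spectra, class sizes and number of components are placeholders, not the study's measurements or its trained model.

```python
# Sketch of the generic PCA + Mahalanobis-distance approach on synthetic data.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)

# Synthetic "spectra": 60 samples x 300 points, three classes (N, BPH, CaP)
X = rng.normal(size=(60, 300))
X[20:40] += 0.5      # shift class 1
X[40:] -= 0.5        # shift class 2
y = np.repeat([0, 1, 2], 20)

# Project onto a few principal components
scores = PCA(n_components=5).fit_transform(X)

# Mahalanobis distance to each class mean, using the pooled covariance of the scores
cov_inv = np.linalg.inv(np.cov(scores, rowvar=False))
means = np.array([scores[y == c].mean(axis=0) for c in range(3)])

def classify(s):
    d = [(s - m) @ cov_inv @ (s - m) for m in means]
    return int(np.argmin(d))

pred = np.array([classify(s) for s in scores])
print("training accuracy:", (pred == y).mean())
```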
Abstract:
This study presents the results of Raman spectroscopy applied to the classification of arterial tissue based on a simplified model using basal morphological and biochemical information extracted from the Raman spectra of arteries. The Raman system uses an 830-nm diode laser, an imaging spectrograph, and a CCD camera. A total of 111 Raman spectra from arterial fragments were used to develop the model, and those spectra were compared to the spectra of collagen, fat cells, smooth muscle cells, calcification, and cholesterol in a linear fit model. Non-atherosclerotic (NA), fatty and fibrous-fatty atherosclerotic plaque (A) and calcified (C) arteries exhibited different spectral signatures related to the different morphological structures present in each tissue type. Discriminant analysis based on Mahalanobis distance was employed to classify the tissue type with respect to the relative intensity of each compound. This model was subsequently tested prospectively in a set of 55 spectra. The simplified diagnostic model showed that cholesterol, collagen, and adipocytes were the tissue constituents that gave the best classification capability and that those changes were correlated to histopathology. The simplified model, using spectra obtained from a few tissue morphological and biochemical constituents, showed feasibility by using a small number of variables, easily extracted from gross samples.
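As a rough illustration of the linear fit step described above, the sketch below decomposes a synthetic spectrum into non-negative contributions of constituent spectra. The basis spectra, weights and noise level are invented for the example; the real model uses measured spectra of collagen, fat cells, smooth muscle cells, calcification and cholesterol.

```python
# Minimal sketch: fit a measured spectrum as a non-negative linear combination
# of constituent spectra. All spectra here are synthetic placeholders.
import numpy as np
from scipy.optimize import nnls

rng = np.random.default_rng(1)
n_points = 400
components = ["collagen", "fat cells", "smooth muscle", "calcification", "cholesterol"]

# Synthetic basis spectra (columns) and a synthetic measured tissue spectrum
B = np.abs(rng.normal(size=(n_points, len(components))))
true_weights = np.array([0.5, 0.1, 0.2, 0.0, 0.2])
measured = B @ true_weights + 0.01 * rng.normal(size=n_points)

# Non-negative least squares gives the relative contribution of each constituent;
# coefficients like these are the features passed to the discriminant analysis.
weights, residual = nnls(B, measured)
for name, w in zip(components, weights / weights.sum()):
    print(f"{name:>14s}: {w:.2f}")
```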
Abstract:
This study describes the creation of a graphical representation based on the application of a questionnaire to evaluate the indicative factors of a sustainable telemedicine and telehealth center in Sao Paulo, Brazil. We categorized the factors into seven domain areas: institutional, functional, economic-financial, renewal, academic-scientific, partnerships, and social welfare, which were plotted into a graphical representation. The developed graph was shown to be useful when used in the same institution over a long period and complemented with secondary information from publications, archives, and administrative documents to support the numerical indicators. Its use may contribute toward monitoring the factors that define telemedicine and telehealth center sustainability. When systematically applied, it may also be useful for identifying the specific characteristics of the telemedicine and telehealth center, to support its organizational development.
Abstract:
The probit model is a popular device for explaining binary choice decisions in econometrics. It has been used to describe choices such as labor force participation, travel mode, home ownership, and type of education. These and many more examples can be found in papers by Amemiya (1981) and Maddala (1983). Given the contribution of economics towards explaining such choices, and given the nature of data that are collected, prior information on the relationship between a choice probability and several explanatory variables frequently exists. Bayesian inference is a convenient vehicle for including such prior information. Given the increasing popularity of Bayesian inference, it is useful to ask whether inferences from a probit model are sensitive to a choice between Bayesian and sampling theory techniques. Of interest is the sensitivity of inference on coefficients, probabilities, and elasticities. We consider these issues in a model designed to explain choice between fixed and variable interest rate mortgages. Two Bayesian priors are employed: a uniform prior on the coefficients, designed to be noninformative for the coefficients, and an inequality restricted prior on the signs of the coefficients. We often know, a priori, whether increasing the value of a particular explanatory variable will have a positive or negative effect on a choice probability. This knowledge can be captured by using a prior probability density function (pdf) that is truncated to be positive or negative. Thus, three sets of results are compared: those from maximum likelihood (ML) estimation, those from Bayesian estimation with an unrestricted uniform prior on the coefficients, and those from Bayesian estimation with a uniform prior truncated to accommodate inequality restrictions on the coefficients.
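A hedged sketch of the sampling-theory side of this comparison is shown below: maximum-likelihood probit estimation on simulated data. The covariates, sample size and coefficient values are hypothetical; the Bayesian estimators discussed in the abstract would reuse the same likelihood, paired with a uniform or sign-restricted (truncated) prior on the coefficients.

```python
# Sketch: maximum-likelihood probit estimation on simulated binary-choice data.
# Data and variable names are hypothetical, not the mortgage data set in the paper.
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

rng = np.random.default_rng(42)
n = 500
X = np.column_stack([np.ones(n), rng.normal(size=(n, 2))])   # intercept + 2 covariates
beta_true = np.array([0.3, 1.0, -0.8])
y = (X @ beta_true + rng.normal(size=n) > 0).astype(float)   # observed binary choice

def neg_log_lik(beta):
    p = norm.cdf(X @ beta)                                   # probit choice probability
    p = np.clip(p, 1e-12, 1 - 1e-12)                         # numerical safety
    return -np.sum(y * np.log(p) + (1 - y) * np.log(1 - p))

res = minimize(neg_log_lik, x0=np.zeros(X.shape[1]), method="BFGS")
print("ML estimates:", res.x)
```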
Abstract:
Epilepsy is the most common serious neurological disorder and approximately 1% of the population worldwide has epilepsy. Moreover, sudden unexpected death in epilepsy (SUDEP) is the most important direct epilepsy-related cause of death. Information concerning risk factors for SUDEP is conflicting, but potential risk factors include: young age, early onset of epilepsy, duration of epilepsy, uncontrolled seizures, seizure frequency, AED number and winter temperatures. Additionally, the cause of SUDEP is still unknown; however, the most commonly suggested mechanisms are cardiac abnormalities during and between seizures. Similarly, sudden death syndrome (SDS) is a disease characterized by the acute death of well-nourished and seemingly healthy Gallus gallus after abrupt and brief flapping of their wings, and the incidence of SDS in these animals has recently increased worldwide. The exact cause of SDS in Gallus gallus is also unknown, but it is very probable that cardiac abnormalities play a potential role. Given the similarities between SUDEP and SDS, and as the behavioral manifestation of Gallus gallus during the SDS phenomenon is close to that of a tonic-clonic seizure, in this paper we suggest that epilepsy could be a new possible causal factor for SDS. (C) 2009 Elsevier Ltd. All rights reserved.
Abstract:
Goal-directed, coordinated movements in humans emerge from a variety of constraints that range from 'high-level' cognitive strategies based on perception of the task to 'low-level' neuromuscular-skeletal factors such as differential contributions to coordination from flexor and extensor muscles. There has been a tendency in the literature to dichotomize these sources of constraint, favouring one or the other rather than recognizing and understanding their mutual interplay. In this experiment, subjects were required to coordinate rhythmic flexion and extension movements with an auditory metronome, the rate of which was systematically increased. When subjects started in extension on the beat of the metronome, there was a small tendency to switch to flexion at higher rates, but not vice versa. When subjects were asked to contact a physical stop, the location of which was either coincident with or counterphase to the auditory stimulus, two effects occurred. When haptic contact was coincident with sound, coordination was stabilized for both flexion and extension. When haptic contact was counterphase to the metronome, coordination was actually destabilized, with transitions occurring both from extension to flexion on the beat and from flexion to extension on the beat. These results reveal the complementary nature of strategic and neuromuscular factors in sensorimotor coordination. They also suggest the presence of a multimodal neural integration process, parametrizable by rate and context, in which intentional movement, touch and sound are bound into a single, coherent unit.
Abstract:
At the core of the analysis task in the development process is information systems requirements modelling. Modelling of requirements has been occurring for many years and the techniques used have progressed from flowcharting through data flow diagrams and entity-relationship diagrams to object-oriented schemas today. Unfortunately, researchers have been able to give only little theoretical guidance to practitioners on which techniques to use and when. In an attempt to address this situation, Wand and Weber have developed a series of models based on the ontological theory of Mario Bunge: the Bunge-Wand-Weber (BWW) models. Two particular criticisms of the models have persisted, however: the understandability of the constructs in the BWW models and the difficulty in applying the models to a modelling technique. This paper addresses these issues by presenting a meta model of the BWW constructs using a meta language that is familiar to many IS professionals, more specific than plain English text, but easier to understand than the set-theoretic language of the original BWW models. Such a meta model also facilitates the application of the BWW theory to other modelling techniques that have similar meta models defined. Moreover, this approach supports the identification of patterns of constructs that might be common across meta models for modelling techniques. Such findings are useful in extending and refining the BWW theory. (C) 2002 Elsevier Science Ltd. All rights reserved.
Abstract:
Internationalisation occurs when the firm expands its selling, production, or other business activities into international markets. Many enterprises, especially small- and medium-size firms (SMEs), are internationalising today at an unprecedented rate. Managers are strategically using information to achieve degrees of internationalisation previously considered the domain of large firms. We extend existing explanations of firm internationalisation by examining the nature and fundamental, antecedent role of internalising appropriate information and translating it into relevant knowledge. Based on case studies of internationalising firms, we advance a conceptualisation of information internalisation and knowledge creation within the firm as it achieves internationalisation readiness. In the process, we offer several propositions intended to guide future research. (C) 2002 Elsevier Science Inc. All rights reserved.
Abstract:
The efficacy of psychological treatments emphasising a self-management approach to chronic pain has been demonstrated by substantial empirical research. Nevertheless, high drop-out and relapse rates and low or unsuccessful engagement in self-management pain rehabilitation programs have prompted the suggestion that people vary in their readiness to adopt a self-management approach to their pain. The Pain Stages of Change Questionnaire (PSOCQ) was developed to assess a patient's readiness to adopt a self-management approach to their chronic pain. Preliminary evidence has supported the PSOCQ's psychometric properties. The current study was designed to further examine the psychometric properties of the PSOCQ, including its reliability, factorial structure and predictive validity. A total of 107 patients with an average age of 36.2 years (SD = 10.63) attending a multi-disciplinary pain management program completed the PSOCQ, the Pain Self-Efficacy Questionnaire (PSEQ) and the West Haven-Yale Multidimensional Pain Inventory (WHYMPI) pre-admission and at discharge from the program. Initial data analysis found inadequate internal consistencies of the precontemplation and action scales of the PSOCQ and a high correlation (r = 0.66, P < 0.01) between the action and maintenance scales. Principal component analysis supported a two-factor structure: 'Contemplation' and 'Engagement'. Subsequent analyses revealed that the PSEQ was a better predictor of treatment outcome than the PSOCQ scales. Discussion centres upon the utility of the PSOCQ in a clinical pain setting in light of the above findings, and a need for further research. (C) 2002 International Association for the Study of Pain. Published by Elsevier Science B.V. All rights reserved.
Abstract:
The majority of the world's population now resides in urban environments and information on the internal composition and dynamics of these environments is essential to enable preservation of certain standards of living. Remotely sensed data, especially the global coverage of moderate spatial resolution satellites such as Landsat, Indian Resource Satellite and Système Pour l'Observation de la Terre (SPOT), offer a highly useful data source for mapping the composition of these cities and examining their changes over time. The utility and range of applications for remotely sensed data in urban environments could be improved with a more appropriate conceptual model relating urban environments to the sampling resolutions of imaging sensors and processing routines. Hence, the aim of this work was to take the Vegetation-Impervious surface-Soil (VIS) model of urban composition and match it with the most appropriate image processing methodology to deliver information on VIS composition for urban environments. Several approaches were evaluated for mapping the urban composition of Brisbane city (south-east Queensland, Australia) using Landsat 5 Thematic Mapper data and 1:5000 aerial photographs. The methods evaluated were: image classification; interpretation of aerial photographs; and constrained linear mixture analysis. Over 900 reference sample points on four transects were extracted from the aerial photographs and used as a basis to check the output of the classification and mixture analysis. Distinctive zonations of VIS related to urban composition were found in the per-pixel classification and aggregated air-photo interpretation; however, significant spectral confusion also resulted between classes. In contrast, the VIS fraction images produced from the mixture analysis enabled distinctive densities of commercial, industrial and residential zones within the city to be clearly defined, based on their relative amount of vegetation cover. The soil fraction image served as an index for areas being (re)developed. The logical match of a low (L)-resolution spectral mixture analysis approach with the moderate spatial resolution image data ensured that the processing model matched the spectrally heterogeneous nature of the urban environments at the scale of Landsat Thematic Mapper data.
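To make the constrained linear mixture analysis step concrete, the sketch below unmixes a single synthetic pixel into vegetation, impervious surface and soil fractions under non-negativity and sum-to-one constraints. The endmember spectra and band values are placeholders, not Landsat 5 TM reflectances from the study.

```python
# Sketch of fully constrained linear spectral unmixing for one pixel:
# fractions are non-negative and sum to one. Endmember spectra are assumed.
import numpy as np
from scipy.optimize import minimize

endmembers = np.array([            # rows: V, I, S; columns: 6 reflective bands
    [0.04, 0.07, 0.05, 0.45, 0.25, 0.12],   # vegetation
    [0.15, 0.18, 0.20, 0.22, 0.24, 0.25],   # impervious surface
    [0.10, 0.14, 0.18, 0.25, 0.30, 0.32],   # soil
])
pixel = 0.5 * endmembers[0] + 0.3 * endmembers[1] + 0.2 * endmembers[2]

def objective(f):
    # Squared residual between the modelled mixture and the observed pixel
    return np.sum((endmembers.T @ f - pixel) ** 2)

constraints = ({"type": "eq", "fun": lambda f: f.sum() - 1.0},)
bounds = [(0.0, 1.0)] * 3
res = minimize(objective, x0=np.full(3, 1 / 3), bounds=bounds,
               constraints=constraints, method="SLSQP")
print("V, I, S fractions:", np.round(res.x, 3))
```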
Abstract:
Over the last 7 years, a method has been developed in Brazil to analyse building energy performance using computer simulation. The method combines analysis of building design plans and documentation, walk-through visits, electric and thermal measurements and the use of an energy simulation tool (the DOE-2.1E code). The method was used to model more than 15 office buildings (more than 200 000 m2), located between 12.5° and 27.5° South latitude. The paper describes the basic methodology, with data for one building, and presents additional results for six other cases. (C) 2002 Elsevier Science Ltd. All rights reserved.
Abstract:
Concerns about reduced productivity and land degradation in the Mitchell grasslands of central western Queensland were addressed through a range monitoring program to interpret condition and trend. Botanical and edaphic parameters were recorded along piosphere and grazing gradients, and across fenceline impact areas, to maximise the changes resulting from grazing. The Degradation Gradient Method was used in conjunction with State and Transition Models to develop models of rangeland dynamics and condition. States were found to be ordered along a degradation gradient, indicator species were identified according to rainfall trends, and transitions were determined from field data and the available literature. Astrebla spp. abundance declined with declining range condition and increasing grazing pressure, while annual grasses and forbs increased in dominance under poor range condition. Soil erosion increased and litter decreased with decreasing range condition. An approach to quantitatively define states within a variable rainfall environment, based upon a time-series ordination analysis, is described. The derived model could provide the interpretive framework necessary to integrate on-ground monitoring, remote sensing and geographic information systems to trace states and transitions at the paddock scale. However, further work is needed to determine the full catalogue of states and transitions and to refine the model for application at the paddock scale.