Abstract:
The main objective of this work is to develop a quasi three-dimensional numerical model to simulate stony debris flows, considering a continuum fluid phase, composed of water and fine sediments, and a non-continuum phase including large particles, such as pebbles and boulders. Large particles are treated in a Lagrangian frame of reference using the Discrete Element Method, while the fluid phase is based on the Eulerian approach, using the Finite Element Method to solve the depth-averaged Navier–Stokes equations in two horizontal dimensions. The particles' equations of motion are three-dimensional. The model simulates particle-particle and wall-particle collisions, taking into account that the particles are immersed in a fluid. Bingham and Cross rheological models are used for the continuum phase. Both formulations provide very stable results, even in the range of very low shear rates, and the Bingham formulation is better able to simulate the stopping stage of the fluid when the applied shear stresses are low. Results of numerical simulations have been compared with data from laboratory experiments on a flume-fan prototype. The results show that the model is capable of simulating the motion of large particles moving in the fluid flow, handling dense particulate flows and avoiding overlap among particles. An application to the debris flow events that occurred in Northern Venezuela in 1999 shows that the model could replicate the main boulder accumulation areas surveyed by the USGS. The uniqueness of this research is the integration of mud flow and stony debris movement in a single modeling tool that can be used for planning and management of debris flow prone areas.
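For reference, the two rheological laws named above take their standard forms below (our notation, not the paper's: τ_y is the yield stress, μ_B the Bingham plastic viscosity, μ_0 and μ_∞ the zero- and infinite-shear viscosities, and K, m the Cross constants; the study's parameter values are not reproduced here):

```latex
% Bingham model: no flow below the yield stress, linear above it
\tau = \tau_y + \mu_B \,\dot{\gamma} \quad (\tau > \tau_y), \qquad \dot{\gamma} = 0 \ \ (\tau \le \tau_y)

% Cross model: apparent viscosity varies smoothly between the
% zero-shear and infinite-shear limits
\mu(\dot{\gamma}) = \mu_\infty + \frac{\mu_0 - \mu_\infty}{1 + (K \dot{\gamma})^{m}}
```

The Cross form avoids the unbounded apparent viscosity of the Bingham law as the shear rate approaches zero, which is consistent with the abstract's remark on stability at very low shear rates.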
Abstract:
An integrated flow and transport model using MIKE SHE/MIKE 11 software was developed to predict the flow and transport of mercury, Hg(II), under varying environmental conditions. The model analyzed the impact of remediation scenarios within the East Fork Poplar Creek watershed of the Oak Ridge Reservation with respect to downstream concentrations of mercury. The numerical simulations included the entire hydrological cycle: flow in rivers, overland flow, groundwater flow in the saturated and unsaturated zones, and evapotranspiration and precipitation time series. Stochastic parameters and hydrologic conditions over a five-year period of historical hydrological data were used to analyze the hydrological cycle and to determine the prevailing mercury transport mechanism within the watershed. Simulations of remediation scenarios revealed that reduction of the highly contaminated point sources, rather than general remediation of the contaminant plume, has a more direct impact on downstream mercury concentrations.
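For context (our notation, not necessarily the exact formulation in the software), one-dimensional river transport modules of this kind solve an advection-dispersion equation with decay and lateral source terms, of the general form:

```latex
\frac{\partial (A C)}{\partial t} + \frac{\partial (Q C)}{\partial x}
  - \frac{\partial}{\partial x}\!\left( A D \frac{\partial C}{\partial x} \right)
  = -A K C + C_2 q
```

where C is the dissolved Hg(II) concentration, A the channel cross-sectional area, Q the discharge, D the longitudinal dispersion coefficient, K a linear decay rate, and C_2 q a lateral inflow source term.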
Abstract:
Within the Stage II program evaluation of the Miami Youth Development Project's (YDP) Changing Lives Program (CLP), this study evaluated CLP intervention effectiveness in promoting positive change in emotion-focused identity exploration (i.e., feelings of personal expressiveness; PE) and a "negative" symptom of identity development (i.e., identity distress; ID) as a first step toward the investigation of a self-transformative model of identity development in adolescent youth. Using structural equation modeling techniques, this study found that participation in the CLP is associated with positive changes in PE (path = .841, p < .002), but not with changes in ID. Increases in ID scores were also found to be associated with increases in PE (path = .229, p < .002). Intervention effects were not moderated by age/stage, gender, or ethnicity, though differences were found in the degree to which participating subgroups (African-American/Hispanic, male/female, 14-16 years old/17-19 years old) experienced change in PE and ID. Findings also suggest that moderate levels of ID may not be deleterious to identity exploration and may be associated with active exploration.
Abstract:
Traditional optics has provided ways to compensate for some common visual limitations (up to second-order visual impairments) through spectacles or contact lenses. Recent developments in wavefront science make it possible to obtain an accurate model of the Point Spread Function (PSF) of the human eye. Through what is known as the "wavefront aberration function" of the human eye, exact knowledge of the eye's optical aberration is possible, allowing a mathematical model of the PSF to be obtained. This model can be used to pre-compensate (inverse-filter) the images displayed on computer screens in order to counter the distortion in the user's eye. This project takes advantage of the fact that the wavefront aberration function, commonly expressed as a Zernike polynomial, can be generated from the ophthalmic prescription used to fit spectacles to a person. This allows the pre-compensation, or on-screen deblurring, to be done for various visual impairments up to second order (commonly known as myopia, hyperopia, or astigmatism). The technique proposed toward that goal is presented, along with results obtained by introducing a lens of known PSF into the visual path of subjects without visual impairment. In addition to substituting for the effect of spectacles or contact lenses in correcting the low-order visual limitations of the viewer, the significance of this approach is its potential to address higher-order abnormalities in the eye that are currently not correctable by simple means.
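A minimal sketch of the pre-compensation idea described above, assuming a known PSF and using Wiener-style regularized inverse filtering in the frequency domain; the function name, regularization constant, and intensity range are our illustrative assumptions, not the authors' implementation:

```python
import numpy as np

def precompensate(image: np.ndarray, psf: np.ndarray, k: float = 1e-2) -> np.ndarray:
    """Inverse-filter `image` so that, after blurring by `psf` (e.g., in the
    viewer's eye), the perceived result approximates the original image.

    Wiener-style regularization (constant k) avoids dividing by near-zero
    frequencies where the PSF has little energy.
    """
    # Transfer function of the blur, zero-padded to the image size.
    H = np.fft.fft2(psf, s=image.shape)
    # Regularized inverse filter: H* / (|H|^2 + k).
    W = np.conj(H) / (np.abs(H) ** 2 + k)
    pre = np.real(np.fft.ifft2(np.fft.fft2(image) * W))
    # Displayed intensities must stay within the screen's dynamic range.
    return np.clip(pre, 0.0, 1.0)
```

The final clipping step reflects a real constraint of the approach: a screen cannot display the negative or out-of-range intensities that an exact inverse filter would demand.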
Abstract:
Hydrogeologic variables controlling groundwater exchange with inflow and flow-through lakes were simulated using a three-dimensional numerical model (MODFLOW) to investigate and quantify spatial patterns of lake bed seepage and hydraulic head distributions in the porous medium surrounding the lakes. The total annual inflow and outflow were also calculated as a percentage of lake volume for the flow-through lake simulations. The general exponential decline of seepage rates with distance offshore was best demonstrated at lower anisotropy ratios (i.e., Kh/Kv = 1 or 10), with increasing deviation from the exponential pattern as anisotropy was increased to 100 and 1000. 2-D vertical section models constructed for comparison with the 3-D models showed that groundwater heads and seepage rates were higher in the 3-D simulations. Addition of low-conductivity lake sediments decreased seepage rates nearshore and increased seepage rates offshore in inflow lakes, and increased the area of groundwater inseepage on the beds of flow-through lakes. Introduction of heterogeneity into the medium lowered the water table and decreased seepage rates nearshore, and increased seepage rates offshore in inflow lakes. A laterally restricted aquifer located on the downgradient side of a flow-through lake increased the area of outseepage. Recharge rate, lake depth and lake bed slope had relatively little effect on the spatial patterns of seepage rates and groundwater exchange with lakes.
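For context, MODFLOW computes the hydraulic head h by finite-difference solution of the standard three-dimensional groundwater flow equation; the anisotropy ratio varied in this study is the ratio of horizontal (Kh) to vertical (Kv) hydraulic conductivity:

```latex
\frac{\partial}{\partial x}\!\left(K_{xx}\frac{\partial h}{\partial x}\right)
+ \frac{\partial}{\partial y}\!\left(K_{yy}\frac{\partial h}{\partial y}\right)
+ \frac{\partial}{\partial z}\!\left(K_{zz}\frac{\partial h}{\partial z}\right)
+ W = S_s \frac{\partial h}{\partial t}
```

where K_xx, K_yy, K_zz are the principal hydraulic conductivities, W is a volumetric source/sink term, and S_s is the specific storage of the porous medium.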
Abstract:
The purpose of this study was to assess and enhance the attitudes and knowledge of physical therapy students toward telecommunication technology. A questionnaire was given to appraise the attitudes and knowledge of 156 physical therapy students toward telecommunication technology. The intervention was a one-hour presentation on applications relevant to physical therapy practice. The majority of students expressed interest in telecommunication before the presentation and felt that expanded use of telecommunication was important to the profession. However, only a minority of students demonstrated knowledge of specific medical telecommunication applications. The post-intervention questionnaire showed the presentation to be effective in changing students' attitudes toward telecommunication and in increasing their knowledge relevant to the practice of physical therapy. If physical therapy curricula were to include exposure to telecommunication, physical therapists might be more inclined to use the technology in the future.
Abstract:
The theoretical construct of control has been described as necessary (Etzioni, 1965), ubiquitous (Vickers, 1967), and on-going (E. Langer, 1983). Empirical measures, however, have not adequately given meaning to this potent construct, especially within complex organizations such as schools. Four stages of theory development and empirical testing of school building managerial control, drawing on principals and teachers working within the nation's fourth-largest district, are presented in this dissertation as follows: (1) a review and synthesis of social science theories of control across the literatures of organizational theory, political science, sociology, psychology, and philosophy; (2) a systematic analysis of school managerial activities performed at the building level within the context of curricular and instructional tasks; (3) the development of a survey questionnaire to measure school building managerial control; and (4) initial tests of construct validity including inter-item reliability statistics, principal components analyses, and multivariate tests of significance. The social science synthesis provided support for four managerial control processes: standards, information, assessment, and incentives. The systematic analysis of school managerial activities led to a further categorization between structural frequency of behaviors and discretionary qualities of behaviors across each of the control processes and the curricular and instructional tasks. Teacher survey responses (N=486) showed a significant difference between these two dimensions of control, structural frequency and discretionary qualities, for standards, information, and assessments, but not for incentives. The descriptive model of school managerial control suggests that (1) teachers perceive structural and discretionary managerial behaviors under information and incentives more clearly than activities representing standards or assessments, (2) standards are primarily structural while assessments are primarily qualitative, (3) teacher satisfaction is most closely related to the equitable distribution of incentives, (4) each of the structural managerial behaviors has a qualitative effect on teachers, and (5) certain qualities of managerial behaviors are perceived by teachers as distinctly discretionary, apart from school structure. Teacher tenure and school effectiveness showed significant effects on school managerial control processes, while instructional levels (elementary, junior, and senior) and individual school differences were not found to be significant for the construct of school managerial control.
Abstract:
The purpose of this study is to produce a model to be used by state regulatory agencies to assess demand for subacute care. In accomplishing this goal, the study refines the definition of subacute care, demonstrates a method for bed need assessment, and measures the effectiveness of this new level of care. This was the largest study of subacute care to date. Research focused on 19 subacute units in 16 states, each of which provides high-intensity rehabilitative and/or restorative care carried out in a high-tech unit. Each of the facilities was based in a nursing home but utilized separate staff, equipment, and services. Because these facilities are under local control, it was possible to study regional differences in subacute care demand. Using these data, a model for predicting demand for subacute care services was created, building on earlier models submitted by John Whitman for the American Hospital Association and by Robin E. MacStravic. The Broderick model uses the "bootstrapping" method and takes advantage of widely available information technology: computers and software, databases in business and government, publicly available databases from providers or commercial vendors, professional organizations, and other information sources. Using these newly available sources of information, the model addresses the problems and needs of health care planners as they approach the challenges of the 21st century.
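As an illustration of the bootstrapping idea the model builds on (entirely our sketch; the data and variable names are hypothetical and do not come from the Broderick model), resampling observed regional utilization with replacement yields an interval estimate of bed need:

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical annual subacute patient-days observed across regional units.
patient_days = np.array([9800, 12150, 7400, 10900, 8650, 11300, 9050])

def beds_needed(days: np.ndarray, occupancy_target: float = 0.85) -> float:
    """Average daily census divided by the target occupancy rate."""
    return days.mean() / 365.0 / occupancy_target

# Bootstrap: resample regions with replacement and recompute the estimate.
estimates = np.array([
    beds_needed(rng.choice(patient_days, size=patient_days.size, replace=True))
    for _ in range(10_000)
])

low, high = np.percentile(estimates, [2.5, 97.5])
print(f"Bed need: {beds_needed(patient_days):.1f} (95% CI {low:.1f}-{high:.1f})")
```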
Abstract:
In essay 1 we develop a new autoregressive conditional process to capture both the changes and the persistence of the intraday seasonal (U-shape) pattern of volatility. Unlike other procedures, this approach allows the intraday volatility pattern to change over time without the filtering process injecting a spurious pattern of noise into the filtered series. We show that prior deterministic filtering procedures are special cases of the autoregressive conditional filtering process presented here. Lagrange multiplier tests show that the stochastic seasonal variance component is statistically significant. Specification tests using the correlogram and cross-spectral analyses confirm the reliability of the autoregressive conditional filtering process. In essay 2 we develop a new methodology to decompose return variance in order to examine the informativeness embedded in the return series. The variance is decomposed into an information arrival component and a noise factor component. This decomposition methodology differs from previous studies in that both the informational variance and the noise variance are time-varying. Furthermore, the covariance of the informational component and the noise component is no longer restricted to be zero. The resulting measure of price informativeness is defined as the informational variance divided by the total variance of the returns. The noisy rational expectations model predicts that uninformed traders react to price changes more than informed traders, since uninformed traders cannot distinguish between price changes caused by information arrivals and price changes caused by noise. This hypothesis is tested in essay 3 using intraday data with the intraday seasonal volatility component removed, based on the procedure developed in the first essay. The resulting seasonally adjusted variance series is decomposed into components caused by unexpected information arrivals and by noise in order to examine informativeness.
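In symbols (our notation, not the essays'), the decomposition described in essay 2 splits the time-varying return variance into an informational component, a noise component, and their unrestricted covariance, and defines price informativeness as the informational share:

```latex
\sigma_t^{2} = \sigma_{I,t}^{2} + \sigma_{N,t}^{2} + 2\,\operatorname{Cov}(I_t, N_t),
\qquad
\text{Informativeness}_t = \frac{\sigma_{I,t}^{2}}{\sigma_t^{2}}
```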
Abstract:
Presents an intervention project that proposes changes to the organizational and operational structure of the Board of Trade of Rio Grande do Norte. It analyzes the routine flow of business registration activities and the organizational structure, and compares the current model with the proposed one. The work is divided into six chapters, ranging from a description of the institution and the object of the work to the proposed new organizational and operational model. The methodology used is a literature review combined with direct observation of current practice.
Abstract:
From data collected by RV Polarstern, and additional echosoundings provided by national hydrographic offices, research institutions and the International Hydrographic Organization (IHO) Digital Bathymetric Data Center, the 1:1,000,000 Bathymetric Chart of the Weddell Sea (AWI BCWS) series has been developed. The heterogeneity of bathymetric data and the lack of observations within ice-covered areas required the incorporation of supplementary geophysical and geographical information. A semi-automatic procedure was developed for terrain modeling and contouring. In coastal regions, adjacent sub-glacial information was included in order to model the bathymetry of the transition zone along the Antarctic ice edge. Six sheets of the AWI BCWS series at the scale of 1:1,000,000, covering the southern Weddell Sea from 66°S to 78°S and from 68°W to 0°E, were recently completed and included in the 1997 GEneral Bathymetric Chart of the Oceans (GEBCO) Digital Atlas CD-ROM (http://www.gebco.net). On the basis of these six 1:1,000,000 AWI BCWS sheets, a generalized 1:3,000,000-scale bathymetric chart was compiled for the entire southern Weddell Sea.
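A minimal sketch of the gridding-and-contouring step that such a semi-automatic terrain modeling procedure involves (our illustration using SciPy/Matplotlib with synthetic soundings; the AWI BCWS workflow itself is not published as code here):

```python
import numpy as np
from scipy.interpolate import griddata
import matplotlib.pyplot as plt

# Hypothetical scattered soundings: lon, lat (degrees) and depth (m, negative down).
rng = np.random.default_rng(0)
lon = rng.uniform(-68.0, 0.0, 500)
lat = rng.uniform(-78.0, -66.0, 500)
depth = -4000.0 + 1500.0 * np.sin(np.radians(lon)) * np.cos(np.radians(lat))

# Terrain modeling step: interpolate the scattered soundings onto a regular grid.
grid_lon, grid_lat = np.meshgrid(np.linspace(-68, 0, 200), np.linspace(-78, -66, 200))
grid_depth = griddata((lon, lat), depth, (grid_lon, grid_lat), method="linear")

# Charting step: contour the gridded surface, e.g. every 500 m.
cs = plt.contour(grid_lon, grid_lat, grid_depth, levels=np.arange(-6000, 0, 500))
plt.clabel(cs, fmt="%d m")
plt.xlabel("Longitude")
plt.ylabel("Latitude")
plt.savefig("weddell_sketch.png")
```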
Abstract:
Aircraft manufacturing industries are looking for solutions to increase their productivity. One such solution is to apply metrology systems during the production and assembly processes. The Metrology Process Model (MPM) (Maropoulos et al., 2007) has been introduced, which links metrology applications with assembly planning, manufacturing processes and product design. Measurability analysis is part of the MPM, and its aim is to check the feasibility of measuring the designed large-scale components. The measurability analysis has been integrated in order to provide an efficient matching system. The metrology database is structured by developing the Metrology Classification Model. Furthermore, the feature-based selection model is also explained. By combining the two classification models, a novel approach and selection process for an integrated measurability analysis system (MAS) are introduced; such an integrated MAS can provide much more meaningful matching results for the operators.
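A toy sketch of the kind of feature-to-instrument matching check a measurability analysis performs (all attributes, names and thresholds below are our invention, not the MAS's actual rules):

```python
from dataclasses import dataclass

@dataclass
class Feature:
    name: str
    size_m: float          # largest dimension of the designed feature
    tolerance_um: float    # required measurement tolerance

@dataclass
class Instrument:
    name: str
    range_m: float         # maximum measurable size
    uncertainty_um: float  # stated measurement uncertainty

def measurable(feature: Feature, instrument: Instrument, ratio: float = 4.0) -> bool:
    """A feature is treated as measurable if the instrument covers its size and
    its uncertainty is at least `ratio` times smaller than the tolerance
    (a common rule-of-thumb test, not a rule from the paper)."""
    return (instrument.range_m >= feature.size_m
            and instrument.uncertainty_um * ratio <= feature.tolerance_um)

wing_rib = Feature("wing rib profile", size_m=6.0, tolerance_um=200.0)
tracker = Instrument("laser tracker", range_m=40.0, uncertainty_um=30.0)
print(measurable(wing_rib, tracker))  # True: 30 um * 4 <= 200 um and 40 m >= 6 m
```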
Abstract:
Melanoma is one of the most aggressive types of cancer. It originates from the transformation of melanocytes present at the epidermal/dermal junction of the human skin. It is commonly accepted that melanomagenesis is influenced by the interaction of environmental factors, genetic factors, and tumor-host interactions. DNA photoproducts induced by UV radiation are, in normal cells, repaired by the nucleotide excision repair (NER) pathway. The prominent role of NER in cancer resistance is well exemplified by patients with Xeroderma Pigmentosum (XP). This disease results from mutations in components of the NER pathway, such as the XPA and XPC proteins. In humans, disruption of the NER pathway leads to the development of skin cancers, including melanoma. Similar to humans afflicted with XP, Xpa- and Xpc-deficient mice show high sensitivity to UV light, leading to the development of skin cancers, with the exception of melanoma. The Endothelin 3 (Edn3) signaling pathway is essential for the proliferation, survival and migration of melanocyte precursor cells. Excessive production of Edn3 leads to the accumulation of large numbers of melanocytes in the mouse skin, where they are not normally found. In humans, the Edn3 signaling pathway has also been implicated in melanoma progression and metastatic potential. The goal of this study was the development of the first UV-induced melanoma mouse model dependent on the over-expression of Edn3 in the skin. The UV-induced melanoma mouse model reported here is distinguishable from all previously published models by two features: melanocytes are not transformed a priori, and melanomagenesis arises only upon neonatal UV exposure. In this model, melanomagenesis depends on the presence of Edn3 in the skin. Disruption of the NER pathway due to the lack of Xpa or Xpc proteins was not essential for melanomagenesis; however, it enhanced melanoma penetrance and decreased melanoma latency after a single neonatal erythemal UV dose. Exposure to a second dose of UV at six weeks of age did not change the time of appearance or the penetrance of melanomas in this mouse model. Thus, a combination of neonatal UV exposure with excessive Edn3 in the tumor microenvironment is sufficient for melanomagenesis in mice; furthermore, NER deficiency exacerbates this process.
Abstract:
Software engineering researchers are challenged to provide increasingly more powerful levels of abstraction to address the rising complexity inherent in software solutions. One development paradigm that places models as abstractions at the forefront of the development process is Model-Driven Software Development (MDSD). MDSD considers models first-class artifacts, extending the capability for engineers to use concepts from the problem domain of discourse to specify apropos solutions. A key component of MDSD is domain-specific modeling languages (DSMLs), which are languages with focused expressiveness targeting a specific taxonomy of problems. The de facto approach is to first transform DSML models to an intermediate artifact in a high-level language (HLL), e.g., Java or C++, and then execute the resulting code. Our research group has developed a class of DSMLs, referred to as interpreted DSMLs (i-DSMLs), where models are directly interpreted by a specialized execution engine with semantics based on model changes at runtime. This execution engine uses a layered architecture and is referred to as a domain-specific virtual machine (DSVM). As the domain-specific model being executed descends the layers of the DSVM, the semantic gap between the user-defined model and the services provided by the underlying infrastructure is closed. The focus of this research is the synthesis engine, the layer in the DSVM that transforms i-DSML models into executable scripts for the next lower layer to process. The appeal of an i-DSML is constrained because it possesses unique semantics contained within the DSVM. Existing DSVMs for i-DSMLs exhibit tight coupling between the implicit model of execution and the semantics of the domain, making it difficult to develop DSVMs for new i-DSMLs without a significant investment in resources. At the onset of this research, only one i-DSML had been created for the user-centric communication domain using the aforementioned approach. This i-DSML is the Communication Modeling Language (CML), and its DSVM is the Communication Virtual Machine (CVM). A major problem with the CVM's synthesis engine is that the domain-specific knowledge (DSK) and the model of execution (MoE) are tightly interwoven; consequently, subsequent DSVMs would need to be developed from inception with no reuse of expertise. This dissertation investigates how to decouple the DSK from the MoE, subsequently producing a generic model of execution (GMoE) from the remaining application logic. This GMoE can be reused to instantiate synthesis engines for DSVMs in other domains. The generalized approach to developing the model synthesis component of i-DSML interpreters utilizes a reusable framework loosely coupled to the DSK as swappable framework extensions. The approach involves first creating an i-DSML and its DSVM for a second domain, demand-side smart grid (microgrid) energy management, and designing the synthesis engine so that the DSK and MoE are easily decoupled. To validate the utility of the approach, the synthesis engines are instantiated using the GMoE and the DSKs of the two aforementioned domains, and an empirical study is performed to support our claim of reduced development effort.
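A toy sketch of the decoupling the dissertation describes, with a generic model of execution delegating to a swappable domain-specific knowledge extension (the class and method names are our illustration, not the CVM's actual API):

```python
from abc import ABC, abstractmethod

class DomainKnowledge(ABC):
    """Swappable domain-specific knowledge (DSK) extension."""

    @abstractmethod
    def decompose(self, model_change: dict) -> list[str]:
        """Map a runtime model change to domain-level steps."""

    @abstractmethod
    def to_script(self, step: str) -> str:
        """Translate one domain-level step into an executable script line."""

class GenericMoE:
    """Generic model of execution (GMoE): domain-agnostic control flow."""

    def __init__(self, dsk: DomainKnowledge):
        self.dsk = dsk  # the only domain-specific dependency

    def synthesize(self, model_change: dict) -> list[str]:
        # Same control flow for every domain; all domain semantics live in dsk.
        return [self.dsk.to_script(step) for step in self.dsk.decompose(model_change)]

class MicrogridDSK(DomainKnowledge):
    """Hypothetical DSK for the demand-side smart grid domain."""

    def decompose(self, model_change: dict) -> list[str]:
        return [f"set:{k}={v}" for k, v in model_change.items()]

    def to_script(self, step: str) -> str:
        return f"microgrid-cli {step}"

engine = GenericMoE(MicrogridDSK())
print(engine.synthesize({"load_limit_kw": 50}))  # ['microgrid-cli set:load_limit_kw=50']
```

Instantiating a synthesis engine for a new domain then amounts to supplying a new DomainKnowledge subclass, which is the reuse claim the empirical study evaluates.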
Abstract:
In an overcapacity world, where customers can choose from many similar products to satisfy their needs, enterprises are looking for new approaches and tools that can help them not only maintain but also increase their competitive edge. Innovation, flexibility, quality, and service excellence are required to, at the very least, survive the ongoing transition that industry is experiencing from mass production to mass customization. To help these enterprises, this research develops a Supply Chain Capability Maturity Model named S(CM)2. The model is intended to support modeling, analyzing, and improving the supply chain management operations of an enterprise. It provides a clear roadmap for enterprise improvement, covering multiple views and abstraction levels of the supply chain, and offers tools to aid the firm in making improvements. The principal research tool applied is the Delphi method, which systematically gathered the knowledge and experience of eighty-eight experts in Mexico. The model is validated using a case study and interviews with experts in supply chain management. The resulting contribution is a holistic model of the supply chain that integrates multiple perspectives and provides a systematic procedure for the improvement of a company's supply chain operations.