920 results for Model-based optimization
Abstract:
Ultra-high-performance fiber-reinforced concrete (UHPFRC) has arisen from the implementation of a variety of concrete engineering and materials science concepts developed over the last century. This material offers superior strength, serviceability, and durability over its conventional counterparts. One of the most important differences between UHPFRC and other concrete materials is its ability to resist fracture through the use of randomly dispersed discontinuous fibers and improvements to the fiber-matrix bond. Of particular interest are the material's ability to carry higher loads after first crack, as well as its high fracture toughness. In this research, a study of the fracture behavior of UHPFRC with steel fibers was conducted to examine the effect of several parameters on the fracture behavior and to develop a fracture model based on a non-linear curve fit of the data. To this end, a series of three-point bending tests was performed on various single-edge-notched prisms (SENPs). Compression tests were also performed for quality assurance. Testing was conducted on specimens of different cross-sections, span/depth (S/D) ratios, curing regimes, ages, and fiber contents. By comparing the results from prisms of different sizes, this study examines the weakening mechanism due to the size effect. Furthermore, by employing the concept of fracture energy, it was possible to compare fracture toughness and ductility across specimens. The model was determined from a fit to P-w fracture curves and was cross-referenced against the experimental results for comparability. The model was then compared to the model proposed by the AFGC in 2003 and to the ACI 544 model for conventional fiber-reinforced concretes.
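The fracture-energy comparison mentioned above can be sketched numerically: fracture energy is the work of fracture (the area under the load versus crack-opening, P-w, curve) divided by the ligament area. A minimal sketch, assuming arrays of measured load and opening values; the function and variable names are illustrative, not from the thesis:

```python
import numpy as np

def fracture_energy(load, opening, ligament_area):
    """Fracture energy G_F: work of fracture (area under the P-w curve,
    integrated by the trapezoidal rule) divided by the ligament area.

    load: loads P in N; opening: crack openings w in mm;
    ligament_area: fractured cross-section in mm^2.
    Returns G_F in N/mm (numerically equal to kJ/m^2).
    """
    work = float(np.sum(0.5 * (load[1:] + load[:-1]) * np.diff(opening)))
    return work / ligament_area

# Illustrative softening curve: peak load followed by a long tail.
w = np.array([0.0, 0.1, 0.5, 2.0])        # mm
P = np.array([0.0, 8000.0, 3000.0, 0.0])  # N
G_F = fracture_energy(P, w, ligament_area=7500.0)  # e.g. 150 mm x 50 mm ligament
```

Comparing G_F values across specimen sizes is one common way to quantify the size effect the study examines.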
Abstract:
BACKGROUND: Gene therapy has recently been introduced as a novel approach to treat ischemic tissues by using the angiogenic potential of certain growth factors. We investigated the effect of adenovirus-mediated gene therapy with transforming growth factor-beta (TGF-beta) delivered into the subdermal space to treat ischemically challenged epigastric skin flaps in a rat model. MATERIAL AND METHODS: A pilot study was conducted in a group of 5 animals pretreated with Ad-GFP; expression of green fluorescent protein in the skin flap sections was demonstrated under fluorescence microscopy at 2, 4, and 7 days after the treatment, indicating successful transfection of the skin flaps following subdermal gene therapy. Next, 30 male Sprague Dawley rats were divided into 3 groups of 10 rats each. An epigastric skin flap model, based solely on the right inferior epigastric vessels, was used in this study. Rats received subdermal injections of adenovirus encoding TGF-beta (Ad-TGF-beta) or green fluorescent protein (Ad-GFP) as a treatment control. The third group (n = 10) received saline and served as a control group. A flap measuring 8 x 8 cm was outlined on the abdominal skin, extending from the xiphoid process proximally to the pubic region distally, and to the anterior axillary lines bilaterally. Just prior to flap elevation, the injections were given subdermally in the left upper corner of the flap. The flap was then sutured back to its bed. Flap viability was evaluated seven days after the initial operation. Digital images of the epigastric flaps were taken, and areas of necrotic zones relative to total flap surface area were measured and expressed as percentages using a software program. RESULTS: Mean percent surviving area was significantly greater in the Ad-TGF-beta group than in the two control groups (P < 0.05) (Ad-TGF-beta: 90.3 +/- 4.0% versus Ad-GFP: 82.2 +/- 8.7% and saline: 82.6 +/- 4.3%).
CONCLUSIONS: In this study, the authors were able to demonstrate that adenovirus-mediated gene therapy using TGF-beta ameliorated ischemic necrosis in an epigastric skin flap model, as confirmed by a significant reduction in the necrotic zones of the flap. The results of this study raise the possibility of using adenovirus-mediated TGF-beta gene therapy to promote perfusion in the random portion of skin flaps, especially in high-risk patients.
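The image-based measurement described above (necrotic area relative to total flap area, expressed as a percentage) reduces to simple mask arithmetic. A minimal sketch, assuming boolean masks already segmented from a digital photograph; the segmentation step and all names here are hypothetical, not the software the authors used:

```python
import numpy as np

def percent_surviving_area(flap_mask, necrosis_mask):
    """Surviving flap area as a percentage of total flap area.

    Both arguments are boolean arrays segmented from a digital image
    of the flap (the segmentation itself is not shown here).
    """
    total = flap_mask.sum()
    necrotic = np.logical_and(flap_mask, necrosis_mask).sum()
    return 100.0 * (total - necrotic) / total

flap = np.ones((100, 100), dtype=bool)   # whole flap visible
necrosis = np.zeros((100, 100), dtype=bool)
necrosis[:10, :] = True                  # distal 10% of the flap necrotic
print(percent_surviving_area(flap, necrosis))  # → 90.0
```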
Abstract:
This paper presents a system for 3-D reconstruction of a patient-specific surface model from calibrated X-ray images. Our system requires two X-ray images of a patient, one acquired from the anterior-posterior direction and the other from the axial direction. A custom-designed cage is utilized in our system to calibrate both images. Starting from bone contours that are interactively identified in the X-ray images, our system constructs a patient-specific surface model of the proximal femur using a statistical-model-based 2D/3D reconstruction algorithm. In this paper, we present the design and validation of the system with 25 bones. An average reconstruction error of 0.95 mm was observed.
Abstract:
Mobile learning, in the past defined as learning with mobile devices, now refers to any type of learning-on-the-go or learning that takes advantage of mobile technologies. This new definition shifts the focus from the mobility of the technology to the mobility of the learner (O'Malley and Stanton 2002; Sharples, Arnedillo-Sanchez et al. 2009). Placing emphasis on the mobile learner's perspective requires studying "how the mobility of learners augmented by personal and public technology can contribute to the process of gaining new knowledge, skills, and experience" (Sharples, Arnedillo-Sanchez et al. 2009). The demands of an increasingly knowledge-based society and the advances in mobile phone technology are combining to spur the growth of mobile learning. Around the world, mobile learning is predicted to be the future of online learning, and it is slowly entering mainstream education. However, for mobile learning to attain its full potential, it is essential to develop more advanced technologies that are tailored to the needs of this new learning environment. A research field that allows putting the development of such technologies on a solid basis is user experience design, which addresses how to improve the usability, and therefore the user acceptance, of a system. Although there is no consensus definition of user experience, simply stated it focuses on how a person feels about using a product, system or service. It is generally agreed that user experience adds subjective attributes and social aspects to a space that has previously concerned itself mainly with ease of use. In addition, it can include users' perceptions of usability and system efficiency. Recent advances in mobile and ubiquitous computing technologies further underline the importance of human-computer interaction and user experience (feelings, motivations, and values) with a system.
Today, there are plenty of reports on the limitations of mobile technologies for learning (e.g., small screen size, slow connection), but there is a lack of research on user experience with mobile technologies. This dissertation fills this gap with a new approach to building a user-experience-based mobile learning environment. The optimized user experience we suggest integrates three priorities, namely a) content, by improving the quality of delivered learning materials, b) the teaching and learning process, by enabling live and synchronous learning, and c) the learners themselves, by enabling timely detection of their emotional state during mobile learning. In detail, the contributions of this thesis are as follows:
• A video codec optimized for screencast videos, which achieves an unprecedented compression rate while maintaining very high video quality, and a novel UI layout for video lectures, which together enable truly mobile access to live lectures.
• A new approach to HTTP-based multimedia delivery that exploits the characteristics of live lectures in a mobile context and enables a significantly improved user experience for mobile live lectures.
• A non-invasive affective learning model based on multi-modal emotion detection with very high recognition rates, which enables real-time emotion detection and subsequent adaptation of the learning environment on mobile devices.
The technology resulting from the research presented in this thesis is in daily use at the School of Continuing Education of Shanghai Jiaotong University (SOCE), a blended-learning institution with 35,000 students.
Volcanic forcing for climate modeling: a new microphysics-based data set covering years 1600–present
Abstract:
As the understanding and representation of the impacts of volcanic eruptions on climate have improved in recent decades, uncertainties in the stratospheric aerosol forcing from large eruptions are now linked not only to visible optical depth estimates on a global scale but also to details of the size, latitude and altitude distributions of the stratospheric aerosols. Based on our understanding of these uncertainties, we propose a new model-based approach to generating a volcanic forcing for general circulation model (GCM) and chemistry–climate model (CCM) simulations. This new volcanic forcing, covering the 1600–present period, uses an aerosol microphysical model to provide a realistic, physically consistent treatment of the stratospheric sulfate aerosols. Twenty-six eruptions were modeled individually using the latest available ice-core aerosol mass estimates and historical data on the latitude and date of each eruption. The evolution of the aerosol spatial and size distributions after the sulfur dioxide discharge is thus characterized for each volcanic eruption. Large variations are seen in hemispheric partitioning and size distributions in relation to the location/date of eruptions and the injected SO2 masses. Results for recent eruptions show reasonable agreement with observations. By providing these new estimates of the spatial distributions of shortwave and longwave radiative perturbations, this volcanic forcing may help to better constrain climate model responses to volcanic eruptions over the 1600–present period. The final data set consists of 3-D values (with constant longitude) of spectrally resolved extinction coefficients, single scattering albedos and asymmetry factors calculated for different wavelength bands upon request. Surface area densities for heterogeneous chemistry are also provided.
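The spectrally resolved extinction coefficients in such a data set relate directly to the optical depth estimates mentioned above: column aerosol optical depth is the vertical integral of the extinction profile, tau = ∫ k_ext(z) dz. A minimal sketch, with illustrative variable names (the actual data-set format is not specified here):

```python
import numpy as np

def optical_depth(extinction, altitude):
    """Column aerosol optical depth: vertical integral of the extinction
    coefficient profile, computed with the trapezoidal rule.

    extinction: profile in km^-1 at the given altitudes; altitude: km.
    """
    return float(np.sum(0.5 * (extinction[1:] + extinction[:-1])
                        * np.diff(altitude)))

z = np.linspace(15.0, 30.0, 16)   # stratospheric levels, km
k = np.full_like(z, 0.01)         # uniform 0.01 km^-1 aerosol layer
tau = optical_depth(k, z)         # ≈ 0.15
```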
Abstract:
The potential and adaptive flexibility of population dynamics P systems (PDP) for studying population dynamics suggest that they may be suitable for modelling complex fluvial ecosystems, characterized by a composition of dynamic habitats with many variables that interact simultaneously. Using as a model a reservoir occupied by the zebra mussel Dreissena polymorpha, we designed a computational model based on P systems to study the population dynamics of larvae, in order to evaluate management actions to control or eradicate this invasive species. The population dynamics of this species was simulated under different scenarios, ranging from the absence of water flow change, to a weekly variation with different flow rates, to the actual hydrodynamic situation of an intermediate flow rate. Our results show that PDP models can be very useful tools for modelling complex, partially desynchronized processes that work in parallel. This allows the study of complex hydroecological processes such as the one presented, where reproductive cycles, temperature and water dynamics are involved in the desynchronization of the population dynamics, both within areas and among them. The results obtained may be useful in the management of other reservoirs with similar hydrodynamic situations in which the presence of this invasive species has been documented.
Abstract:
This chapter proposes a personalized X-ray reconstruction-based planning and post-operative treatment evaluation framework, called iJoint, for advancing modern Total Hip Arthroplasty (THA). Based on a mobile X-ray image calibration phantom and a unique 2D-3D reconstruction technique, iJoint can generate patient-specific models of the hip joint by non-rigidly matching statistical shape models to the X-ray radiographs. Such a reconstruction enables true 3D planning and treatment evaluation of hip arthroplasty from just 2D X-ray radiographs, whose acquisition is part of the standard diagnostic and treatment loop. As part of the system, a 3D model-based planning environment provides surgeons with hip arthroplasty related parameters such as implant type, size, position, offset and leg length equalization. With this newly developed system, we are able to provide true 3D solutions for computer assisted planning of THA using only 2D X-ray radiographs, which is not only innovative but also cost-effective.
Abstract:
Periacetabular Osteotomy (PAO) is a joint preserving surgical intervention intended to increase femoral head coverage and thereby to improve stability in young patients with hip dysplasia. Previously, we developed a CT-based, computer-assisted program for PAO diagnosis and planning, which allows for quantifying the 3D acetabular morphology with parameters such as acetabular version, inclination, lateral center edge (LCE) angle and femoral head coverage ratio (CO). In order to verify the hypothesis that our morphology-based planning strategy can improve biomechanical characteristics of dysplastic hips, we developed a 3D finite element model based on patient-specific geometry to predict cartilage contact stress change before and after morphology-based planning. Our experimental results demonstrated that the morphology-based planning strategy could reduce cartilage contact pressures and at the same time increase contact areas. In conclusion, our computer-assisted system is an efficient tool for PAO planning.
Abstract:
The events of the 1990s and early 2000s demonstrated the need for effective planning and response to natural and man-made disasters. One such potential natural disaster is pandemic flu. The CDC has stated that program, or plan, effectiveness is improved through the process of program evaluation (Centers for Disease Control and Prevention, 1999). Program evaluation should be accomplished not only periodically, but in the course of routine administration of the program (Centers for Disease Control and Prevention, 1999). Accomplishing this task for a "rare, but significant event" is challenging (Herbold, 2008). To address this challenge, the RAND Corporation (under contract to the CDC) developed the "Facilitated Look-Backs" approach, which was tested and validated at the state level (Aledort et al., 2006). Nevertheless, no comprehensive and generally applicable pandemic influenza program evaluation tool or model is readily available for use at the local public health department (LPHD) level. This project developed such a model based on the "Facilitated Look-Backs" approach developed by the RAND Corporation (Aledort et al., 2006). Modifications to the RAND model included stakeholder additions, inclusion of all six CDC program evaluation steps, and suggestions for incorporating pandemic flu response plans into seasonal flu management implementation. Feedback on the model was then obtained from three LPHDs: one rural, one suburban, and one urban. These recommendations were incorporated into the final model. Feedback from the sites also supported the assumption that this model promotes effective and efficient evaluation of both pandemic flu and seasonal flu response by reducing redundant evaluations of pandemic flu plans, seasonal flu plans, and funding-requirement accountability. Site feedback also demonstrated that the model is comprehensive and flexible, so it can be adapted and applied to different LPHD needs and settings.
It also stimulates evaluation of the major issues associated with pandemic flu planning. The next phase in evaluating this model should be to apply it in a program evaluation of one or more LPHDs' seasonal flu response that incorporates pandemic flu response plans.
Abstract:
As the requirements for health care hospitalization have become more demanding, the discharge planning process has become a more important part of the health services system. A thorough understanding of hospital discharge planning can therefore contribute to our understanding of the health services system. This study involved the development of a process model of discharge planning from hospitals. Model building involved the identification of factors used by discharge planners to develop aftercare plans, and the specification of the roles of these factors in the development of the discharge plan. The factors in the model were concatenated in 16 discrete decision sequences, each of which produced an aftercare plan. The sample for this study comprised 407 inpatients admitted to the M. D. Anderson Hospital and Tumor Institute at Houston, Texas, who were discharged to any site within Texas during a 15-day period. Allogeneic bone marrow donors were excluded from the sample. The factors considered in the development of discharge plans were recorded by discharge planners and were used to develop the model. Data analysis consisted of sorting the discharge plans using the plan development factors until, for some combination and sequence of factors, all patients were discharged to a single site. The arrangement of factors that led to that aftercare plan became a decision sequence in the model. The model constructs the same discharge plans as those developed by hospital staff for every patient in the study. Tests of the validity of the model should be extended to other patients at the MDAH, to other cancer hospitals, and to other inpatient services. Revisions of the model based on these tests should be of value in the management of discharge planning services and in the design and development of comprehensive community health services.
Abstract:
The genomic era brought about by recent advances in next-generation sequencing technology makes genome-wide scans of natural selection a reality. Currently, almost all statistical tests and analytical methods for identifying genes under selection are performed on an individual-gene basis. Although these methods have the power to identify genes subject to strong selection, they have limited power to discover genes targeted by moderate or weak selection forces, which are crucial for understanding the molecular mechanisms of complex phenotypes and diseases. The recent availability and rapidly increasing completeness of many gene network and protein-protein interaction databases accompanying the genomic era open avenues for enhancing the power to discover genes under natural selection. The aim of this thesis is to explore and develop normal-mixture-model-based methods that leverage gene network information to enhance the power of natural selection target gene discovery. The results show that the developed statistical method, which combines the posterior log odds of the standard normal mixture model and the Guilt-By-Association score of the gene network in a naïve Bayes framework, has the power to discover moderately/weakly selected genes that bridge the genes under strong selection, and it improves our understanding of the biology underlying complex diseases and related natural selection phenotypes.
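The naïve Bayes combination described above rests on a simple property: under conditional independence of the evidence sources, log-likelihood ratios add on the log-odds scale. A minimal sketch, assuming the Guilt-By-Association score has already been expressed as a log-likelihood ratio (all names here are illustrative, not the thesis's actual implementation):

```python
import math

def combined_posterior(mixture_log_odds, gba_log_lr):
    """Naive-Bayes combination of two evidence sources: the network-based
    log-likelihood ratio adds to the mixture model's posterior log odds.
    Returns the posterior probability that a gene is under selection.
    """
    log_odds = mixture_log_odds + gba_log_lr
    return 1.0 / (1.0 + math.exp(-log_odds))

# A gene with weak individual evidence (log odds ~ 0) but strong network
# support gains posterior probability from its neighbours:
print(round(combined_posterior(0.0, 2.0), 3))  # → 0.881
```

This is how moderately/weakly selected genes that are well connected to strongly selected genes can rise above the detection threshold.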
Abstract:
The hierarchical linear growth model (HLGM), as a flexible and powerful analytic method, has played an increasingly important role in psychology, public health and the medical sciences in recent decades. Mostly, researchers who conduct HLGM analyses are interested in the treatment effect on individual trajectories, which is indicated by the cross-level interaction effects. However, the statistical hypothesis test for the cross-level interaction effect in HLGM only shows whether there is a significant group difference in the average rate of change, rate of acceleration or a higher polynomial effect; it fails to convey information about the magnitude of the difference between the group trajectories at a specific time point. Thus, reporting and interpreting effect sizes have received increased emphasis in HLGM in recent years, owing to the limitations of, and growing criticism aimed at, statistical hypothesis testing. However, most researchers fail to report these model-implied effect sizes for group trajectory comparisons and their corresponding confidence intervals in HLGM analyses, because of the lack of appropriate, standard functions for estimating effect sizes associated with the model-implied difference between group trajectories in HLGM, and the lack of computing packages in popular statistical software to calculate them automatically. The present project is the first to establish appropriate computing functions to assess the standardized difference between group trajectories in HLGM. We propose two functions to estimate effect sizes for the model-based difference between group trajectories at a specific time, and we also suggest robust effect sizes to reduce the bias of the estimated effect sizes.
Then, we applied the proposed functions to estimate the population effect sizes (d) and robust effect sizes (du) for the cross-level interaction in HLGM using three simulated datasets, and we compared three methods of constructing confidence intervals around d and du, recommending the best one for application. Finally, we constructed 95% confidence intervals with the suitable method for the effect sizes obtained from the three simulated datasets. The effect sizes between group trajectories for the three simulated longitudinal datasets indicated that even when the statistical hypothesis test shows no significant difference between group trajectories, effect sizes between these trajectories can still be large at some time points. Therefore, effect sizes between group trajectories in HLGM analyses provide additional, meaningful information for assessing the group effect on individual trajectories. In addition, we compared three methods of constructing 95% confidence intervals around the corresponding effect sizes, which address the uncertainty of the effect sizes as estimates of the population parameter. We suggest the noncentral-t-distribution-based method when its assumptions hold, and the bootstrap bias-corrected and accelerated (BCa) method when they do not.
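The core quantities above (a standardized difference between groups at one time point, with a bootstrap confidence interval) can be sketched as follows. This is a hypothetical stand-in, not the project's actual functions: it uses a pooled-SD standardized mean difference and a plain percentile bootstrap, whereas the project works with model-implied trajectory differences and compares noncentral-t and BCa intervals:

```python
import numpy as np

rng = np.random.default_rng(42)

def effect_size_d(group1, group2):
    """Standardized mean difference with a pooled SD, evaluated on the
    two groups' outcomes at a single time point."""
    n1, n2 = len(group1), len(group2)
    pooled_sd = np.sqrt(((n1 - 1) * np.var(group1, ddof=1)
                         + (n2 - 1) * np.var(group2, ddof=1)) / (n1 + n2 - 2))
    return (np.mean(group1) - np.mean(group2)) / pooled_sd

def percentile_ci(group1, group2, n_boot=2000, alpha=0.05):
    """Percentile bootstrap CI for d: resample each group with
    replacement, recompute d, and take the empirical quantiles."""
    ds = [effect_size_d(rng.choice(group1, len(group1)),
                        rng.choice(group2, len(group2)))
          for _ in range(n_boot)]
    return np.quantile(ds, [alpha / 2, 1 - alpha / 2])

# Simulated outcomes for two groups at one time point:
g1 = rng.normal(1.0, 1.0, 60)
g2 = rng.normal(0.4, 1.0, 60)
d = effect_size_d(g1, g2)
lo, hi = percentile_ci(g1, g2)
```

A CI that excludes zero at some time points, even when the interaction test is non-significant, is exactly the kind of additional information the project argues for.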