908 results for Capability Maturity Model for Software


Relevance:

30.00%

Publisher:

Abstract:

An axisymmetric, elastic pipe is filled with an incompressible fluid and is immersed in a second, coaxial rigid pipe which contains the same fluid. A pressure pulse in the outer fluid annulus deforms the elastic pipe, which induces a fluid motion in the fluid core. The aim of this study is to investigate streaming phenomena in the core which may originate from such a fluid-structure interaction. This work presents a numerical solver for such a configuration. It was developed in the OpenFOAM software environment and is based on the Arbitrary Lagrangian Eulerian (ALE) approach for moving meshes. The solver features a monolithic integration of the one-dimensional, coupled system between the elastic structure and the outer fluid annulus into a dynamic boundary condition for the moving surface of the fluid core. Results indicate that our configuration may serve as a mechanical model of the Tullio phenomenon (sound-induced vertigo).
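
A minimal one-dimensional sketch of the coupling described above (not the authors' OpenFOAM solver): a linear tube law lets an assumed pressure pulse in the annulus deform the elastic pipe, and the resulting radial wall velocity is exactly the quantity an ALE solver would impose as a dynamic boundary condition on the fluid core. All parameter values are illustrative assumptions.

```python
import numpy as np

nz, nt = 200, 2000            # axial cells, time steps
L, T = 0.1, 0.01              # tube length [m], simulated time [s]
dt = T / nt
K = 5e4                       # effective wall stiffness [Pa] (assumed)
A0 = np.pi * (2e-3) ** 2      # reference cross-sectional area [m^2]

z = np.linspace(0.0, L, nz)
A = np.full(nz, A0)           # current cross-sectional area
w_max = 0.0

def annulus_pulse(t, z):
    """Gaussian pressure pulse travelling along the outer annulus (assumed)."""
    c = 5.0                   # pulse speed [m/s] (assumed)
    return 2e3 * np.exp(-((z - c * t) / 0.01) ** 2)

for n in range(nt):
    # Linear tube law: transmural pressure compresses the elastic tube.
    A_new = A0 * (1.0 - annulus_pulse(n * dt, z) / K)
    # Radial wall velocity, dR/dt = (dA/dt) / (2*sqrt(pi*A)): this is what an
    # ALE solver would impose on the moving surface of the fluid core.
    w_wall = (A_new - A) / dt / (2.0 * np.sqrt(np.pi * A))
    w_max = max(w_max, np.max(np.abs(w_wall)))
    A = A_new

print(f"max wall deflection: {100 * np.max(np.abs(A - A0)) / A0:.2f} % of A0")
print(f"max radial wall velocity: {w_max:.4f} m/s")
```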

Relevance:

30.00%

Publisher:

Abstract:

Given the centrality of control for achieving success in outsourced software projects, past research has identified key exogenous factors that determine the choice of controls. This view of exogenously driven control choice rests on a number of assumptions; in particular, clients and vendors are seen as separate cognitive entities that combat opportunistic threats under environmental uncertainty through one-off choices or infrequent revisions of controls. In this paper we complement this perspective by acknowledging that an outsourced software project may be characterized as a collective, evolving process faced with the challenge of coping with the cognitive limitations of both client and vendor through a continuous process of learning. We argue that, viewed in this way, controls are less the subject of a deliberate choice and more the subject of endogenously driven change, i.e. controls evolve in close interaction with the evolving software project. Accordingly, we suggest a complementary model of endogenous control, in which controls mediate individual and collective learning processes. Our research contributes to a better understanding of the dynamics of outsourced software projects. It also spells out methodological implications that may help improve cross-sectional control research.

Relevance:

30.00%

Publisher:

Abstract:

BACKGROUND The aim of this study was to evaluate the accuracy of linear measurements on three imaging modalities: lateral cephalograms from a cephalometric machine with a 3 m source-to-mid-sagittal-plane distance (SMD), lateral cephalograms from a machine with a 1.5 m SMD, and 3D models from cone-beam computed tomography (CBCT) data. METHODS Twenty-one dry human skulls were used. Lateral cephalograms were taken using two cephalometric devices: one with a 3 m SMD and one with a 1.5 m SMD. CBCT scans were taken with a 3D Accuitomo® 170, and 3D surface models were created in Maxilim® software. Thirteen linear measurements were completed twice by two observers with a 4 week interval. Direct physical measurements with a digital calliper were defined as the gold standard. Statistical analysis was performed. RESULTS Nasion-Point A differed significantly from the gold standard in all methods. More statistically significant differences were found for the measurements on the 3 m SMD cephalograms than for the other methods. Intra- and inter-observer agreement based on 3D measurements was slightly better than for the other methods. LIMITATIONS Dry human skulls without soft tissues were used. Therefore, the results have to be interpreted with caution, as they do not fully represent clinical conditions. CONCLUSIONS 3D measurements resulted in better observer agreement. The accuracy of measurements based on CBCT and on the 1.5 m SMD cephalogram was better than for the 3 m SMD cephalogram. These findings demonstrate the accuracy and reliability of linear measurements based on CBCT data compared with 2D techniques. Future studies should focus on the implementation of 3D cephalometry in clinical practice.
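
The abstract says only that "statistical analysis was performed", so the following is a generic, hedged sketch of how linear measurements from one modality might be compared against the calliper gold standard: paired differences, a paired t-test, and Bland-Altman limits of agreement, all on synthetic data.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
gold = rng.uniform(20, 80, size=21)           # calliper distances [mm] (synthetic)
cbct = gold + rng.normal(0.1, 0.4, size=21)   # 3D-model measurements (synthetic)

diff = cbct - gold
t, p = stats.ttest_rel(cbct, gold)            # paired test against gold standard
bias = diff.mean()
loa = (bias - 1.96 * diff.std(ddof=1),        # Bland-Altman limits of agreement
       bias + 1.96 * diff.std(ddof=1))

print(f"bias = {bias:.2f} mm, p = {p:.3f}, "
      f"95% limits of agreement = ({loa[0]:.2f}, {loa[1]:.2f}) mm")
```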

Relevance:

30.00%

Publisher:

Abstract:

Every x-ray attenuation curve inherently contains all the information necessary to extract the complete energy spectrum of a beam. To date, attempts to obtain accurate spectral information from attenuation data have been inadequate. This investigation presents a mathematical pair model, grounded in physical reality by the Laplace transformation, to describe the attenuation of a photon beam and the corresponding bremsstrahlung spectral distribution. In addition, the Laplace model has been mathematically extended to include characteristic radiation in a physically meaningful way. A method to determine the fraction of characteristic radiation in any diagnostic x-ray beam was introduced for use with the extended model. This work examined the reconstructive capability of the Laplace pair model for photon beams ranging from 50 kVp to 25 MV, using both theoretical and experimental methods. In the diagnostic region, excellent agreement between a wide variety of experimental spectra and those reconstructed with the Laplace model was obtained when the atomic composition of the attenuators was accurately known. The model successfully reproduced a 2 MV spectrum but demonstrated difficulty in accurately reconstructing orthovoltage and 6 MV spectra. The 25 MV spectrum was successfully reconstructed, although poor agreement with the spectrum obtained by Levy was found. The analysis of errors, performed with diagnostic-energy data, demonstrated the relative insensitivity of the model to typical experimental errors and confirmed that the model can be used to derive accurate spectral information from experimental attenuation data.
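
The physical relation behind the pair model can be stated compactly: a transmission curve is the Laplace transform of the spectrum expressed as a density in the attenuation coefficient, T(x) = ∫ S(μ) e^(−μx) dμ. The dissertation inverts this with an analytic Laplace-pair model; as a hedged stand-in, the sketch below discretizes the integral and recovers a synthetic spectrum by non-negative least squares.

```python
import numpy as np
from scipy.optimize import nnls

mu = np.linspace(0.1, 1.0, 40)              # attenuation coefficients [1/cm]
x = np.linspace(0.0, 20.0, 60)              # absorber thicknesses [cm]
S_true = np.exp(-((mu - 0.4) / 0.1) ** 2)   # synthetic bremsstrahlung-like spectrum

# Discretized Laplace kernel: T(x) = sum_j S(mu_j) * exp(-mu_j * x) * d_mu
A = np.exp(-np.outer(x, mu)) * (mu[1] - mu[0])
T = A @ S_true                              # forward model: attenuation curve
T_noisy = T * (1 + 1e-3 * np.random.default_rng(1).normal(size=T.size))

# Inverting a Laplace kernel is ill-posed; the non-negativity constraint of
# nnls provides some stabilization in this toy setting.
S_rec, _ = nnls(A, T_noisy)
print("relative reconstruction error:",
      np.linalg.norm(S_rec - S_true) / np.linalg.norm(S_true))
```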

Relevance:

30.00%

Publisher:

Abstract:

miRNAs function to regulate gene expression through post-transcriptional mechanisms and thereby potentially regulate multiple aspects of physiology and development. Whole-transcriptome analysis has been conducted on the citron kinase (Cit-k) mutant rat, a mutant that shows decreased brain growth and development. The resulting RNA differences between mutants and wild-type controls can be used to identify genetic pathways that may be regulated differently in normal compared to abnormal neurogenesis. The goal of this thesis was to verify, with quantitative reverse transcriptase polymerase chain reaction (qRT-PCR), changes in miRNA expression in Cit-k mutants and wild types. In addition to confirming miRNA expression changes, the bioinformatics software TargetScan 5.1 was used to identify potential mRNA targets of the differentially expressed miRNAs. The miRNAs confirmed to change were rno-miR-466c, mmu-miR-493, mmu-miR-297a, hsa-miR-765, and hsa-miR-1270. The TargetScan analysis revealed 347 potential targets with known roles in development. A subset of these potential targets are genes involved in the Wnt signaling pathway, a known important regulator of stem cell development.
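
The abstract does not name the qRT-PCR quantification method, but the standard approach for such data is the 2^−ΔΔCt method, which the sketch below assumes. The Ct values and the reference RNA are hypothetical.

```python
def fold_change(ct_target_mut, ct_ref_mut, ct_target_wt, ct_ref_wt):
    """Relative expression by the standard 2^-ddCt method (assumed here)."""
    d_ct_mut = ct_target_mut - ct_ref_mut   # normalize to a reference RNA
    d_ct_wt = ct_target_wt - ct_ref_wt
    dd_ct = d_ct_mut - d_ct_wt              # mutant vs. wild-type contrast
    return 2.0 ** (-dd_ct)

# Hypothetical Ct values: rno-miR-466c in a Cit-k mutant vs. a wild type.
print(fold_change(24.1, 18.0, 26.3, 18.1))  # >1 means up-regulated in mutant
```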

Relevance:

30.00%

Publisher:

Abstract:

With continual improvements in brachytherapy source designs and techniques, a method of 3D dosimetry for treatment dose verification would better ensure accurate patient radiotherapy treatment. This study first evaluated the 3D dose distributions of the low-dose-rate (LDR) Amersham 6711 Oncoseed™ using PRESAGE® dosimeters, to establish PRESAGE® as a suitable brachytherapy dosimeter. The new AgX100 125I seed model (Theragenics Corporation) was then characterized using PRESAGE® following the TG-43 protocol. PRESAGE® dosimeters are solid, polyurethane-based 3D dosimeters doped with radiochromic leuco dyes that produce a linear optical-density response to radiation dose. For this project, the radiochromic response in PRESAGE® was captured by optical-CT scanning (632 nm) and the final 3D dose matrix was reconstructed in MATLAB. An Amersham 6711 seed with an air-kerma strength of approximately 9 U was used to irradiate two dosimeters to 2 Gy and 11 Gy at 1 cm, to evaluate dose rates in the r = 1 cm to r = 5 cm region. The dosimetry parameters were compared to the values published in the updated AAPM Report No. 51 (TG-43U1). An AgX100 seed with an air-kerma strength of about 6 U was used to irradiate two dosimeters to 3.6 Gy and 12.5 Gy at 1 cm. The dosimetry parameters for the AgX100 were compared to values from previous Monte Carlo and experimental studies. In general, the measured dose-rate constant, anisotropy function, and radial dose function for the Amersham 6711 agreed with consensus values to better than 5% in the r = 1 to r = 3 cm region. The dose rates and radial dose functions measured for the AgX100 agreed with the MCNPX and TLD-measured values within 3% in the r = 1 to r = 3 cm region. The measured anisotropy function in PRESAGE® showed relative differences of up to 9% from the MCNPX-calculated values. Post-irradiation optical-density change over several days was found to be non-linear in different dose regions, so the dose values in the r = 4 to r = 5 cm regions carried higher uncertainty. This study demonstrated that, within a radial distance of 3 cm, brachytherapy dosimetry in PRESAGE® can be accurate within 5% as long as irradiation times are within 48 hours.
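
A short sketch of the TG-43 quantities mentioned above: the line-source geometry function and the radial dose function g_L(r), as one would extract them from transverse-axis dose values in a reconstructed PRESAGE® dose matrix. The source length and dose values are placeholders, not measurements from this study.

```python
import numpy as np

L_SRC = 0.3     # active source length [cm], typical for 125I seeds (assumed)
R0 = 1.0        # TG-43 reference distance [cm]

def G_L(r, L=L_SRC):
    """Line-source geometry function on the transverse axis (theta = 90 deg)."""
    beta = 2.0 * np.arctan(L / (2.0 * r))   # angle subtended by the source
    return beta / (L * r)

def radial_dose_function(r, dose_r, dose_r0):
    """g_L(r) = [D(r) G_L(r0)] / [D(r0) G_L(r)] per TG-43U1."""
    return (dose_r * G_L(R0)) / (dose_r0 * G_L(r))

r = np.array([1.0, 2.0, 3.0])
dose = np.array([1.00, 0.22, 0.080])        # placeholder transverse-axis doses
print(radial_dose_function(r, dose, dose[0]))   # g_L(1 cm) = 1 by definition
```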

Relevance:

30.00%

Publisher:

Abstract:

The hierarchical linear growth model (HLGM), a flexible and powerful analytic method, has played an increasingly important role in psychology, public health, and the medical sciences in recent decades. Researchers who conduct HLGM analyses are mostly interested in the treatment effect on individual trajectories, which is indicated by the cross-level interaction effects. However, the statistical hypothesis test for a cross-level interaction in HLGM only shows whether there is a significant group difference in the average rate of change, rate of acceleration, or higher polynomial effect; it conveys no information about the magnitude of the difference between the group trajectories at a specific time point. Reporting and interpreting effect sizes in HLGM has therefore received increased emphasis in recent years, owing to the limitations of, and growing criticism of, statistical hypothesis testing. Most researchers nevertheless fail to report these model-implied effect sizes for comparing group trajectories, along with their corresponding confidence intervals, because appropriate standard functions for estimating effect sizes associated with the model-implied difference between group trajectories in HLGM are lacking, as are routines in popular statistical software to calculate them automatically. The present project is the first to establish appropriate computing functions to assess the standardized difference between group trajectories in HLGM. We propose two functions to estimate effect sizes for the model-based difference between group trajectories at a specific time, and we also suggest robust effect sizes to reduce the bias of the estimates. We then applied the proposed functions to estimate the population effect sizes (d) and robust effect sizes (du) for the cross-level interaction in HLGM using three simulated datasets, compared three methods of constructing confidence intervals around d and du, and recommended the best one for application. Finally, we constructed 95% confidence intervals with the most suitable method for the effect sizes obtained from the three simulated datasets. The effect sizes between group trajectories for the three simulated longitudinal datasets indicated that, even when the statistical hypothesis test shows no significant difference between group trajectories, the effect sizes between these trajectories can still be large at some time points. Effect sizes between group trajectories in an HLGM analysis therefore provide additional, meaningful information for assessing the group effect on individual trajectories. In addition, the three methods compared for constructing 95% confidence intervals around the corresponding effect sizes address the uncertainty of the effect sizes as estimates of the population parameters. We suggest the noncentral t-distribution-based method when its assumptions hold, and the bootstrap bias-corrected and accelerated method when they are not met.
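
A hedged sketch of the core idea: in a two-group linear growth model the model-implied trajectories differ by δ(t) = γ01 + γ11·t, and a time-specific effect size divides this difference by a pooled SD. The exact estimators and the robust variant proposed in the project are not reproduced here; this illustrates d at one time point with a simple percentile bootstrap CI (the project itself recommends the BCa variant).

```python
import numpy as np

def d_at_time(y_g1, y_g0):
    """Standardized group difference at one time point (pooled SD)."""
    n1, n0 = len(y_g1), len(y_g0)
    sp = np.sqrt(((n1 - 1) * y_g1.var(ddof=1) + (n0 - 1) * y_g0.var(ddof=1))
                 / (n1 + n0 - 2))
    return (y_g1.mean() - y_g0.mean()) / sp

rng = np.random.default_rng(2)
t = 4.0                                            # time point of interest
y_treat = 2.0 + 0.8 * t + rng.normal(0, 2, 100)    # model-implied value + noise
y_ctrl = 2.0 + 0.3 * t + rng.normal(0, 2, 100)     # slopes differ by gamma_11

boot = np.array([d_at_time(rng.choice(y_treat, y_treat.size),
                           rng.choice(y_ctrl, y_ctrl.size))
                 for _ in range(5000)])
ci = np.percentile(boot, [2.5, 97.5])   # simple percentile CI; the project
                                        # recommends BCa when assumptions fail
print(f"d(t={t}) = {d_at_time(y_treat, y_ctrl):.2f}, 95% CI = {ci.round(2)}")
```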

Relevance:

30.00%

Publisher:

Abstract:

ETOPO1 is a 1 arc-minute global relief model of Earth's surface that integrates land topography and ocean bathymetry. It was built from numerous global and regional data sets. The data were converted to the PanMap layer format as 14 contour lines from 500 to 7000 m in steps of 500 m. The link provides a zip archive (1.1 MB) with *.lay files. The PanMap Mini-GIS software is published at doi:10.1594/PANGAEA.104840.
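
As a small illustration of the layer scheme, the sketch below generates the 14 contour levels (500 to 7000 m in 500 m steps) and extracts such contours from a synthetic relief grid with matplotlib; it does not use the actual ETOPO1 data.

```python
import numpy as np
import matplotlib.pyplot as plt

levels = np.arange(500, 7001, 500)      # the 14 levels: 500, 1000, ..., 7000 m
assert len(levels) == 14

# Synthetic relief grid standing in for a real 1 arc-minute relief model.
lon, lat = np.meshgrid(np.linspace(0, 10, 200), np.linspace(0, 10, 200))
relief = 4000 * (np.sin(lon) ** 2 + np.cos(lat) ** 2)   # elevation [m]

cs = plt.contour(lon, lat, relief, levels=levels)
print(f"extracted {len(cs.levels)} contour levels")
```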

Relevance:

30.00%

Publisher:

Abstract:

This work addresses the construction of open source institutional repositories with the Greenstone software. It follows two paths, one theoretical and one model-based, developing a practical application along the way. The first path, which constitutes the theoretical framework, comprises a description of the open access and open source philosophies for the creation of institutional repositories. It also covers, in general terms, topics related to the OAI protocol, the legal framework regarding intellectual property, licences, and an introduction to metadata. The same path addresses theoretical aspects of institutional repositories: definitions, benefits, types, the components involved, open source tools for creating repositories, a description of those tools and, finally, an extended description of the Greenstone software, chosen for the model development of the institutional repository presented in a digital demonstration. The second path, corresponding to the model development, includes on the one hand the repository model itself, built with Greenstone, detailing one by one the components that make it up; it is the theoretical-practical input for the step-by-step design of the institutional repository. On the other hand, it includes the result of the modelling, i.e. the repository created, which is exported as a web environment to digital media for visibility. The step-by-step design of the repository constitutes the substantive core contribution of this thesis.
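
Since the thesis covers the OAI protocol used by institutional repositories, a minimal OAI-PMH harvesting request is sketched below. The endpoint URL is a placeholder; a Greenstone collection exposes the same verbs when its OAI server is enabled.

```python
import requests
import xml.etree.ElementTree as ET

BASE = "https://example.org/oai"        # hypothetical repository endpoint
params = {"verb": "ListRecords", "metadataPrefix": "oai_dc"}

resp = requests.get(BASE, params=params, timeout=30)
root = ET.fromstring(resp.content)

# Standard OAI-PMH and Dublin Core namespaces.
OAI = "{http://www.openarchives.org/OAI/2.0/}"
DC = "{http://purl.org/dc/elements/1.1/}"

for rec in root.iter(f"{OAI}record"):
    title = rec.find(f".//{DC}title")
    if title is not None:
        print(title.text)               # one harvested record title per line
```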

Relevance:

30.00%

Publisher:

Abstract:

These data are provided to allow users to reproduce the results of an open source tool entitled 'automated Accumulation Threshold computation and RIparian Corridor delineation (ATRIC)'.

Relevance:

30.00%

Publisher:

Abstract:

Usability is the capability of the software product to be understood, learned, used and attractive to the user, when used under specified conditions. Many studies demonstrate the benefits of usability, yet to this day software products continue to exhibit consistently low levels of this quality attribute. Furthermore, poor usability in software systems contributes significantly to software failures in actual use. One of the main disciplines involved in usability is Human-Computer Interaction (HCI). Over the past two decades the HCI community has proposed specific features that should be present in applications to improve their usability, yet incorporating them into software continues to be far from trivial for software developers. These difficulties are due to multiple factors, including the high level of abstraction at which these HCI recommendations are made and how far removed they are from actual software implementation. In order to bridge this gap, the Software Engineering community has long proposed software design solutions to help developers include usability features in software; the problem nonetheless remains an open research question. This doctoral thesis addresses the problem of helping software developers include specific usability features in their applications by providing them with structured and tangible guidance in the form of a process, which we have termed the Usability-Oriented Software Development Process. This process is supported by a set of Software Usability Guidelines that help developers incorporate a set of eleven usability features with high impact on software design. The Usability-Oriented Software Development Process and the Software Usability Guidelines have been validated across multiple academic projects and shown to help software developers include such usability features in their applications; their use significantly reduced development time and improved the quality of the resulting designs. Furthermore, we propose a software tool to automate the application of the proposed process. In sum, this work contributes to the integration of the Software Engineering and HCI disciplines by providing a framework that helps software developers create usable applications in an efficient way.
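
The abstract does not list the eleven usability features, but undo is a classic usability feature with high impact on software design in this research line; the hedged sketch below shows the kind of design-level guidance such guidelines give developers, here a command-pattern structure that makes undo possible at all.

```python
from dataclasses import dataclass

@dataclass
class Document:
    text: str = ""

class InsertText:
    """A reversible user action: knows how to do and undo itself."""
    def __init__(self, doc, s):
        self.doc, self.s = doc, s
    def execute(self):
        self.doc.text += self.s
    def undo(self):
        self.doc.text = self.doc.text[: -len(self.s)]

class CommandHistory:
    """Executed commands are stacked so the user can always step back."""
    def __init__(self):
        self._stack = []
    def run(self, cmd):
        cmd.execute()
        self._stack.append(cmd)
    def undo_last(self):
        if self._stack:
            self._stack.pop().undo()

doc = Document()
hist = CommandHistory()
hist.run(InsertText(doc, "hello "))
hist.run(InsertText(doc, "world"))
hist.undo_last()
print(doc.text)   # -> "hello "
```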

Relevance:

30.00%

Publisher:

Abstract:

Infrastructure-as-a-Service clouds are a flexible and fast way to obtain (virtual) resources as demand varies. Grids, on the other hand, are middleware platforms able to combine resources from different administrative domains for task execution. Clouds can be used by grids as providers of devices such as virtual machines, so that grids use only the resources they need. But this requires grids to be able to decide when to allocate and release those resources. Here we introduce and analyze by simulation an economic mechanism (a) to set resource prices and (b) to decide when to scale resources depending on user demand. The system places a strong emphasis on fairness, so that no user hinders the execution of other users' tasks by acquiring too many resources. Our simulator is based on the well-known GridSim software for grid simulation, which we extend to simulate infrastructure clouds. The results show how the proposed system can successfully adapt the amount of allocated resources to the demand, while at the same time ensuring that resources are fairly shared among users.
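
A hedged sketch of the two decisions the mechanism makes, (a) setting a price and (b) deciding when to scale, using a simple utilization-based rule. The paper's actual mechanism and its fairness analysis are more elaborate; the thresholds and formulas below are illustrative assumptions.

```python
def price(base_price, used, total):
    """Price rises with utilization, so heavy demand discourages hoarding."""
    utilization = used / total
    return base_price * (1.0 + utilization)

def scaling_decision(used, total, pending_tasks, per_user_share, user_usage):
    """Allocate a VM only if demand exists and the user is under fair share."""
    if pending_tasks == 0:
        return "release" if used / total < 0.3 else "hold"
    if user_usage >= per_user_share:
        return "hold"            # fairness: no user may crowd out the others
    return "allocate"

# A cloud with 100 VM slots, 70 busy; a user at half its fair share asks for more.
print(price(0.10, used=70, total=100))                      # about 0.17
print(scaling_decision(70, 100, pending_tasks=5,
                       per_user_share=20, user_usage=10))   # -> "allocate"
```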

Relevance:

30.00%

Publisher:

Abstract:

Models are an effective tool for systems and software design. They allow software architects to abstract from non-relevant details. Those qualities are also useful for the technical management of networks, systems, and software, such as those that compose service-oriented architectures. Models can provide a set of well-defined abstractions over the distributed, heterogeneous service infrastructure that enable its automated management. We propose to use the managed system as a source of dynamically generated runtime models, and to decompose management processes into compositions of model transformations. We have created an autonomic service deployment and configuration architecture that obtains, analyzes, and transforms system models to apply the required actions, while remaining oblivious to the low-level details. An instrumentation layer automatically builds these models and translates the planned management actions back to the system. We illustrate these concepts with a distributed service update operation.
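
A hedged sketch of the central idea: treat the running system as a source of models and express a management process as a transformation from the current and target models into concrete actions. The model schema below is invented for illustration; in the paper, an instrumentation layer builds the real models.

```python
def plan_update(current, target):
    """Transform (current model, target model) -> list of deployment actions."""
    actions = []
    for svc, version in target["services"].items():
        deployed = current["services"].get(svc)
        if deployed is None:
            actions.append(("deploy", svc, version))
        elif deployed != version:
            actions.append(("update", svc, deployed, version))
    for svc in current["services"]:
        if svc not in target["services"]:
            actions.append(("undeploy", svc))
    return actions

# Runtime model of what is deployed vs. the desired configuration (invented).
current = {"services": {"billing": "1.0", "auth": "2.1", "legacy": "0.9"}}
target = {"services": {"billing": "1.1", "auth": "2.1"}}
print(plan_update(current, target))
# -> [('update', 'billing', '1.0', '1.1'), ('undeploy', 'legacy')]
```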

Relevance:

30.00%

Publisher:

Abstract:

When developing new IT products, reuse of existing components is a key aspect that can considerably improve the success rate. This has become even more important with the rise of the open source paradigm. However, integrating different products and technologies is not always an easy task. Different communities employ different standards and tools, and most of the time it is not clear which dependencies a particular piece of software has. This is exacerbated by the transitive nature of these dependencies, making component integration a complicated affair. To help reduce this complexity we propose a model-based repository capable of automatically resolving the required dependencies. This repository needs to be expandable, so that new constraints can be analyzed, and must also support federation, for integration with other sources of artifacts. The solution we propose achieves these goals by working with OSGi components and using OSGi itself.
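
A hedged sketch of what such a repository must do at its core: given declared direct dependencies, compute the transitive closure so that requesting one component yields everything it needs, in a valid installation order. The bundle names below are invented; real entries would be OSGi bundles with version constraints.

```python
def resolve(component, dependencies, resolved=None, seen=None):
    """Depth-first transitive resolution returning a valid install order."""
    resolved = [] if resolved is None else resolved
    seen = set() if seen is None else seen
    if component in resolved:
        return resolved                  # already scheduled via another path
    if component in seen:
        raise ValueError(f"dependency cycle at {component}")
    seen.add(component)
    for dep in dependencies.get(component, []):
        resolve(dep, dependencies, resolved, seen)
    resolved.append(component)           # all prerequisites come first
    return resolved

deps = {
    "my.app": ["org.osgi.framework", "logging"],
    "logging": ["org.osgi.framework"],
    "org.osgi.framework": [],
}
print(resolve("my.app", deps))
# -> ['org.osgi.framework', 'logging', 'my.app']
```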