Abstract:
In the laboratory of Dr. Dieter Jaeger at Emory University, we use computer simulations to study how the biophysical properties of neurons—including their three-dimensional structure, passive membrane resistance and capacitance, and active membrane conductances generated by ion channels—affect the way that the neurons transfer synaptic inputs into the action potential streams that represent their output. Because our ultimate goal is to understand how neurons process and relay information in a living animal, we try to make our computer simulations as realistic as possible. As such, the computer models reflect the detailed morphology and all of the ion channels known to exist in the particular neuron types being simulated, and the model neurons are tested with synaptic input patterns that are intended to approximate the inputs that real neurons receive in vivo. The purpose of this workshop tutorial was to explain what we mean by ‘in vivo-like’ synaptic input patterns, and how we introduce these input patterns into our computer simulations using the freely available GENESIS software package (http://www.genesis-sim.org/GENESIS). The presentation was divided into four sections: first, an explanation of what we are talking about when we refer to in vivo-like synaptic input patterns
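In vivo-like input patterns of the kind described above are commonly built from stochastic spike trains delivered to the model's synapses. As a minimal illustration (a sketch in Python, not the authors' actual GENESIS scripts), a homogeneous Poisson spike train can be generated by thresholding uniform random numbers in small time bins:

```python
import numpy as np

def poisson_spike_train(rate_hz, duration_s, dt=1e-4, rng=None):
    """Generate spike times (s) for a homogeneous Poisson process.

    Each time bin of width dt fires independently with probability
    rate_hz * dt (valid when rate_hz * dt << 1).
    """
    rng = np.random.default_rng(rng)
    n_bins = int(duration_s / dt)
    fires = rng.random(n_bins) < rate_hz * dt
    return np.nonzero(fires)[0] * dt

# 100 independent input trains firing at 5 Hz for 2 s of simulated time
trains = [poisson_spike_train(5.0, 2.0, rng=i) for i in range(100)]
mean_rate = np.mean([len(t) / 2.0 for t in trains])
```

Realistic in vivo-like inputs typically go further (rate modulation, correlations between inputs), but each elaboration starts from independent trains like these.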
Abstract:
PDP++ is a freely available, open source software package designed to support the development, simulation, and analysis of research-grade connectionist models of cognitive processes. It supports most popular parallel distributed processing paradigms and artificial neural network architectures, and it also provides an implementation of the LEABRA computational cognitive neuroscience framework. Models are typically constructed and examined using the PDP++ graphical user interface, but the system may also be extended through the incorporation of user-written C++ code. This article briefly reviews the features of PDP++, focusing on its utility for teaching cognitive modeling concepts and skills to university undergraduate and graduate students. An informal evaluation of the software as a pedagogical tool is provided, based on the author’s classroom experiences at three research universities and several conference-hosted tutorials.
Abstract:
HYPOTHESIS A previously developed image-guided robot system can safely drill a tunnel from the lateral mastoid surface, through the facial recess, to the middle ear, as a viable alternative to conventional mastoidectomy for cochlear electrode insertion. BACKGROUND Direct cochlear access (DCA) provides a minimally invasive tunnel from the lateral surface of the mastoid through the facial recess to the middle ear for cochlear electrode insertion. A safe and effective tunnel drilled through the narrow facial recess requires a highly accurate image-guided surgical system. Previous attempts have relied on patient-specific templates and robotic systems to guide drilling tools. In this study, we report on improvements made to an image-guided surgical robot system developed specifically for this purpose and the resulting accuracy achieved in vitro. MATERIALS AND METHODS The proposed image-guided robotic DCA procedure was carried out bilaterally on 4 whole head cadaver specimens. Specimens were implanted with titanium fiducial markers and imaged with cone-beam CT. A preoperative plan was created using a custom software package wherein relevant anatomical structures of the facial recess were segmented, and a drill trajectory targeting the round window was defined. Patient-to-image registration was performed with the custom robot system to reference the preoperative plan, and the DCA tunnel was drilled in 3 stages with progressively longer drill bits. The position of the drilled tunnel was defined as a line fitted to a point cloud of the segmented tunnel using principal component analysis (PCA function in MATLAB). The accuracy of the DCA was then assessed by coregistering preoperative and postoperative image data and measuring the deviation of the drilled tunnel from the plan. The final step of electrode insertion was also performed through the DCA tunnel after manual removal of the promontory through the external auditory canal. 
RESULTS Drilling error was defined as the lateral deviation of the tool in the plane perpendicular to the drill axis (excluding depth error). Errors of 0.08 ± 0.05 mm and 0.15 ± 0.08 mm were measured on the lateral mastoid surface and at the target on the round window, respectively (n = 8). Full electrode insertion was possible in 7 cases. In 1 case, the electrode was partially inserted with 1 contact pair external to the cochlea. CONCLUSION The purpose-built robot system was able to perform a safe and reliable DCA for cochlear implantation. The workflow implemented in this study mimics the envisioned clinical procedure, showing the feasibility of future clinical implementation.
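The tunnel-axis fit described in the methods (a line fitted to a point cloud by principal component analysis) can be reproduced in a few lines. The sketch below, written with numpy rather than the paper's MATLAB implementation, fits the axis via an SVD of the centered point cloud and measures lateral deviation perpendicular to that axis; the synthetic point cloud is illustrative:

```python
import numpy as np

def fit_line_pca(points):
    """Fit a 3-D line to a point cloud: return (centroid, unit direction)."""
    pts = np.asarray(points, dtype=float)
    centroid = pts.mean(axis=0)
    # First right-singular vector = principal axis of the centered cloud
    _, _, vt = np.linalg.svd(pts - centroid, full_matrices=False)
    return centroid, vt[0]

def lateral_deviation(point, centroid, direction):
    """Distance from a point to the line, measured perpendicular to it."""
    d = np.asarray(point, dtype=float) - centroid
    return np.linalg.norm(d - np.dot(d, direction) * direction)

# Synthetic "tunnel": points scattered tightly along the z-axis
rng = np.random.default_rng(0)
t = rng.uniform(0, 30, 500)[:, None]
cloud = t * np.array([0.0, 0.0, 1.0]) + rng.normal(0, 0.01, (500, 3))
c, u = fit_line_pca(cloud)
```

The drilling error quoted in the abstract corresponds to `lateral_deviation` evaluated between the planned and the fitted (actual) trajectories at the entry plane and at the target.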
Abstract:
The intention of an authentication and authorization infrastructure (AAI) is to simplify and unify access to different web resources. With a single login, a user can access web applications at multiple organizations. The Shibboleth authentication and authorization infrastructure is a standards-based, open source software package for web single sign-on (SSO) across or within organizational boundaries. It allows service providers to make fine-grained authorization decisions for individual access of protected online resources. The Shibboleth system is a widely used AAI, but only supports protection of browser-based web resources. We have implemented a Shibboleth AAI extension to protect web services using Simple Object Access Protocol (SOAP). Besides user authentication for browser-based web resources, this extension also provides user and machine authentication for web service-based resources. Although implemented for a Shibboleth AAI, the architecture can be easily adapted to other AAIs.
Abstract:
Stata is a general-purpose software package that has become popular in disciplines such as epidemiology, economics, and the social sciences. Users like Stata for its scientific approach, its robustness and reliability, and the ease with which its functionality can be extended by user-written programs. In this talk I will first give a brief overview of the functionality of Stata and then discuss two specific features: survey estimation and predictive margins/marginal effects. Most surveys are based on complex samples that contain multiple sampling stages, are stratified or clustered, and feature unequal selection probabilities. Standard estimators can produce misleading results in such samples unless the peculiarities of the sampling plan are taken into account. Stata offers survey statistics for complex samples for a wide variety of estimators and supports several variance estimation procedures such as linearization, jackknife, and balanced repeated replication (see Kreuter and Valliant, 2007, Stata Journal 7: 1-21). In the talk I will illustrate these features using applied examples and show how user-written commands can be adapted to support complex samples. The models we fit to our data can also be complex, making them difficult to interpret, especially in the case of nonlinear or non-additive models (Mood, 2010, European Sociological Review 26: 67-82). Stata provides a number of highly useful commands that make the results of such models accessible by computing and displaying predictive margins and marginal effects. In my talk I will discuss these commands and provide various examples demonstrating their use.
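Marginal effects of the kind computed by Stata's margins-style commands reduce to averaging derivatives of the fitted response over the sample. As a minimal sketch (in Python with hypothetical data and coefficients, not Stata code), the average marginal effect (AME) of a continuous covariate in a logit model is the sample mean of p(1-p) times the coefficient:

```python
import numpy as np

def logistic(z):
    return 1.0 / (1.0 + np.exp(-z))

def average_marginal_effect(X, beta, j):
    """AME of covariate j in a logit model: the sample mean of
    dP/dx_j = p * (1 - p) * beta_j, evaluated at each observation."""
    p = logistic(X @ beta)
    return np.mean(p * (1 - p) * beta[j])

# Hypothetical sample: intercept plus one standard-normal covariate
rng = np.random.default_rng(1)
X = np.column_stack([np.ones(1000), rng.normal(0, 1, 1000)])
beta = np.array([0.0, 0.5])
ame = average_marginal_effect(X, beta, 1)
```

Because p(1-p) peaks at 0.25, the AME here is somewhat below 0.25 × 0.5; averaging over observations rather than evaluating at the means is exactly the distinction Stata draws between average marginal effects and marginal effects at the mean.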
Abstract:
Context. To date, calculations of planet formation have mainly focused on dynamics, and only a few have considered the chemical composition of refractory elements and compounds in the planetary bodies. While many studies have been concentrating on the chemical composition of volatile compounds (such as H2O, CO, CO2) incorporated in planets, only a few have considered the refractory materials as well, although they are of great importance for the formation of rocky planets. Aims. We computed the abundance of refractory elements in planetary bodies formed in stellar systems with a solar chemical composition by combining models of chemical composition and planet formation. We also considered the formation of refractory organic compounds, which have been ignored in previous studies on this topic. Methods. We used the commercial software package HSC Chemistry to compute the condensation sequence and chemical composition of refractory minerals incorporated into planets. The problem of refractory organic material is approached with two distinct model calculations: the first considers that the fraction of atoms used in the formation of organic compounds is removed from the system (i.e., organic compounds are formed in the gas phase and are non-reactive); and the second assumes that organic compounds are formed by the reaction between different compounds that had previously condensed from the gas phase. Results. Results show that refractory material represents more than 50 wt % of the mass of solids accreted by the simulated planets with up to 30 wt % of the total mass composed of refractory organic compounds. Carbide and silicate abundances are consistent with C/O and Mg/Si elemental ratios of 0.5 and 1.02 for the Sun. Less than 1 wt % of carbides are present in the planets, and pyroxene and olivine are formed in similar quantities. The model predicts planets that are similar in composition to those of the solar system. 
Starting from a common initial nebula composition, it also shows that a wide variety of chemically different planets can form, which means that the differences in planetary compositions are due to differences in the planetary formation process. Conclusions. We show that a model in which refractory organic material is absent from the system is more compatible with observations. The use of a planet formation model is essential to form a wide diversity of planets in a consistent way.
Abstract:
PURPOSE Lymphangioleiomyomatosis (LAM) is characterized by proliferation of smooth muscle tissue that causes bronchial obstruction and secondary cystic destruction of lung parenchyma. The aim of this study was to evaluate the typical distribution of cystic defects in LAM with quantitative volumetric chest computed tomography (CT). MATERIALS AND METHODS CT examinations of 20 patients with confirmed LAM were evaluated with region-based quantification of lung parenchyma. Additionally, 10 consecutive patients who had recently undergone CT imaging of the lung at our institution, and in whom no pathologies of the lung were found, were identified to serve as a control group. Each lung was divided into three regions (upper, middle and lower thirds) with an identical number of slices. In addition, we defined a "peel" and "core" of the lung, comprising the 2 cm subpleural space and the remaining inner lung area, respectively. Computerized detection of lung volume and relative emphysema was performed with the PULMO 3D software (v3.42, Fraunhofer MEVIS, Bremen, Germany). This software package enables the quantification of emphysematous lung parenchyma by calculating the pixel index, which is defined as the ratio of lung voxels with a density < -950 HU to the total number of voxels in the lung. RESULTS Cystic changes accounted for 0.1-39.1% of the total lung volume in patients with LAM. Disease manifestation in the central lung was significantly higher than in peripheral areas (peel median: 15.1%, core median: 20.5%; p=0.001). The lower thirds of lung parenchyma showed significantly fewer cystic changes than the upper and middle lung areas combined (lower third: median 13.4, upper and middle thirds: median 19.0, p=0.001). CONCLUSION The distribution of cystic lesions in LAM is significantly more pronounced in the central lung compared to peripheral areas. There is a significant predominance of cystic changes in apical and intermediate lung zones compared to the lung bases.
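The pixel index defined above is straightforward to compute from a Hounsfield-unit volume and a lung mask. The following is a minimal sketch of that definition (not the PULMO 3D implementation), using a toy volume:

```python
import numpy as np

def pixel_index(hu_volume, lung_mask, threshold=-950):
    """Fraction of lung voxels with density below `threshold` HU."""
    lung = hu_volume[lung_mask]
    return np.count_nonzero(lung < threshold) / lung.size

# Toy volume: lung voxels near -850 HU, with 20% cystic voxels at -980 HU
vol = np.full((10, 10, 10), -850.0)
mask = np.ones_like(vol, dtype=bool)
vol[:2] = -980.0
pi = pixel_index(vol, mask)
```

Restricting the mask to the peel, core, or one of the axial thirds yields the region-wise percentages reported in the results.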
Abstract:
Femoroacetabular impingement (FAI) is a dynamic conflict of the hip defined by a pathological, early abutment of the proximal femur onto the acetabulum or pelvis. In the past two decades, FAI has received increasing focus in both research and clinical practice as a cause of hip pain and prearthrotic deformity. Anatomical abnormalities such as an aspherical femoral head (cam-type FAI), a focal or general overgrowth of the acetabulum (pincer-type FAI), a high riding greater or lesser trochanter (extra-articular FAI), or abnormal torsion of the femur have been identified as underlying pathomorphologies. Open and arthroscopic treatment options are available to correct the deformity and to allow impingement-free range of motion. In routine practice, diagnosis and treatment planning of FAI is based on clinical examination and conventional imaging modalities such as standard radiography, magnetic resonance arthrography (MRA), and computed tomography (CT). Modern software tools allow three-dimensional analysis of the hip joint by extracting pelvic landmarks from two-dimensional antero-posterior pelvic radiographs. An object-oriented cross-platform program (Hip2Norm) has been developed and validated to standardize pelvic rotation and tilt on conventional AP pelvis radiographs. It has been shown that Hip2Norm is an accurate, consistent, reliable and reproducible tool for the correction of selected hip parameters on conventional radiographs. In contrast to conventional imaging modalities, which provide only static visualization, novel computer assisted tools have been developed to allow the dynamic analysis of FAI pathomechanics. In this context, a validated, CT-based software package (HipMotion) has been introduced. HipMotion is based on polygonal three-dimensional models of the patient’s pelvis and femur. The software includes simulation methods for range of motion, collision detection and accurate mapping of impingement areas. 
A preoperative treatment plan can be created by performing a virtual resection of any mapped impingement zones both on the femoral head-neck junction, as well as the acetabular rim using the same three-dimensional models. The following book chapter provides a summarized description of current computer-assisted tools for the diagnosis and treatment planning of FAI highlighting the possibility for both static and dynamic evaluation, reliability and reproducibility, and its applicability to routine clinical use.
Abstract:
Electricity markets in the United States presently employ an auction mechanism to determine the dispatch of power generation units. In this market design, generators submit bid prices to a regulatory agency for review, and the regulator selects bids in such a way that electricity demand is satisfied. Most regulators currently use an auction selection method that minimizes total offer costs ["bid cost minimization" (BCM)] to determine electric dispatch. However, recent literature has shown that this method may not minimize consumer payments, and it has been shown that an alternative selection method that directly minimizes total consumer payments ["payment cost minimization" (PCM)] may benefit social welfare in the long term. The objective of this project is to further investigate the long-term benefit of PCM implementation and determine whether it can provide lower costs to consumers. The two auction selection methods are expressed as linear constraint programs and are implemented in an optimization software package. A methodology for game-theoretic bidding simulation is developed using EMCAS, a real-time market simulator. Results of a 30-day simulation showed that PCM reduced energy costs for consumers by 12%. However, this result will be cross-checked in the future with two other methods of bid simulation as proposed in this paper.
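The distinction between total offer cost and consumer payment can be seen in a stylized single-period market with simple energy-only bids, where BCM dispatch reduces to merit order. The sketch below uses illustrative numbers and ignores the start-up and no-load costs and block constraints under which BCM and PCM genuinely diverge; it is not the project's linear-program formulation:

```python
def merit_order_dispatch(bids, caps, demand):
    """Dispatch under bid-cost minimization for a single period with
    simple energy-only bids: accept the cheapest offers first.
    Returns (dispatch, total offer cost, uniform-price consumer payment)."""
    order = sorted(range(len(bids)), key=lambda i: bids[i])
    dispatch = [0.0] * len(bids)
    remaining = demand
    clearing_price = 0.0
    for i in order:
        if remaining <= 0:
            break
        q = min(caps[i], remaining)
        dispatch[i] = q
        remaining -= q
        clearing_price = bids[i]          # marginal (last accepted) bid
    offer_cost = sum(b * q for b, q in zip(bids, dispatch))
    consumer_payment = clearing_price * demand   # uniform pricing
    return dispatch, offer_cost, consumer_payment

# Three generators: bids in $/MWh, 100 MWh capacity each, 150 MWh demand
dispatch, cost, payment = merit_order_dispatch([20, 30, 50], [100, 100, 100], 150)
```

Here the offer cost (what BCM minimizes) is $3,500 while consumers pay $4,500 at the uniform clearing price; PCM takes the latter quantity as the auction's objective function.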
Abstract:
The core descriptions (chapter 7) summarize the most important results of the analysis of each sediment core following procedures applied during ODP/IODP expeditions. All cores were opened, described, and color-scanned. In the core descriptions the first column displays the lithological data that are based on visual analysis of the core and are supplemented by information from binocular and smear slide analyses. The sediment classification largely follows ODP/IODP convention. Lithological names consist of a principal name based on composition, degree of lithification, and/or texture as determined from visual description and microscopic observations. In the structure column the intensity of bioturbation together with individual or special features (turbidites, volcanic ash layers, plant debris, shell fragments, etc.) is shown. The hue and chroma attributes of color were determined by comparison with the Munsell soil color charts and are given in the color column in the Munsell notation. A GretagMacbeth™ Spectrolino spectrophotometer was used to measure percent reflectance values of sediment color at 36 wavelength channels over the visible light range (380-730 nm) on all of the cores. The digital reflectance data of the spectrophotometer readings were routinely obtained from the surface (measured in 1 cm steps) of the split cores (archive half). The Spectrolino is equipped with a measuring aperture with a folding mechanism that allows exact positioning on the split core and is connected to a portable computer. The data are displayed directly within the software package Excel and can be checked immediately. From all the color measurements, the red/blue ratio (700 nm/450 nm) and the lightness are shown for each core together with the visual core description. The reflectance of individual wavelengths is often significantly affected by the presence of minor amounts of oxyhydroxides or sulphides. To eliminate these effects, we used the red/blue ratio and lightness.
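With 36 reflectance channels at 10 nm spacing over 380-730 nm, the red/blue ratio reduces to an index lookup into each spectrum. A minimal sketch of that computation with a synthetic spectrum (not the authors' actual processing chain):

```python
import numpy as np

wavelengths = np.arange(380, 731, 10)      # 36 channels, 380-730 nm

def red_blue_ratio(reflectance):
    """Ratio of percent reflectance at 700 nm to that at 450 nm."""
    r = reflectance[np.where(wavelengths == 700)[0][0]]
    b = reflectance[np.where(wavelengths == 450)[0][0]]
    return r / b

# Toy spectrum rising linearly from 20% (380 nm) to 55% (730 nm)
spectrum = np.linspace(20.0, 55.0, wavelengths.size)
ratio = red_blue_ratio(spectrum)
```

Using a ratio of two channels, rather than a single wavelength, is what suppresses the broadband offsets introduced by minor oxyhydroxide or sulphide content.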
Abstract:
Sr isotope analyses have been conducted on anhydrite samples from the TAG (Trans-Atlantic Geotraverse) active hydrothermal mound (26°08′N, Mid-Atlantic Ridge) that have previously been shown to exhibit two distinct patterns of REE behavior when normalized to TAG end-member hydrothermal fluid. Despite differences in REE patterns, the Sr isotope data indicate that all the anhydrites precipitated from fluids with a similar range of hydrothermal fluid and seawater components, and all but one were seawater-dominated (52%-75%). Speciation calculations using the EQ3/6 software package for geochemical modeling of aqueous systems suggest that the REE complexation behavior in different fluid mixing scenarios can explain the variations in the REE patterns. Anhydrites that exhibit relatively flat REE patterns [(La_bs)/(Yb_bs) = 0.8-2.0; subscript bs indicates normalization to end-member black smoker hydrothermal fluid] and a small or no Eu anomaly [(Eu_bs)/(Eu*_bs) = 0.8-2.0] are inferred to have precipitated from mixes of end-member hydrothermal fluid and cold seawater. REE complexes with hard ligands (e.g., fluoride and chloride) are less stable at low temperatures and trivalent Eu has an ionic radius similar to that of Ca2+ and the other REE, and so they behave coherently. In contrast, anhydrites that exhibit slight LREE-depletion [(La_bs)/(Yb_bs) = 0.4-1.4] and a distinct negative anomaly [(Eu_bs)/(Eu*_bs) = 0.2-0.8] are inferred to have precipitated from mixes of end-member hydrothermal fluid and conductively heated seawater. The LREE depletion results from the presence of very stable LREE chloro-complexes that effectively limit the availability of the LREE for partitioning into anhydrite. 
Above 250°C, Eu is present only in divalent form as chloride complexes, and discrimination against Eu2+ is likely due to both the mismatch in ionic radii between Eu2+ and Ca2+, and the strong chloro-complexation of divalent Eu which promotes stability in the fluid and inhibits partitioning of Eu2+ into precipitating anhydrite. These variations in REE behavior attest to rapid fluctuations in thermal regime, fluid flow and mixing in the subsurface of the TAG mound that give rise to heterogeneity in the formation conditions of individual anhydrite crystals.
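The seawater fractions quoted above (52%-75%) follow from standard two-endmember mixing of Sr isotope ratios weighted by Sr concentration. A sketch of that calculation, with illustrative (not measured) endmember values:

```python
def seawater_fraction(r_mix, r_sw, r_hf, c_sw, c_hf):
    """Mass fraction of seawater in a two-endmember mixture, from its
    87Sr/86Sr ratio, weighting each endmember by its Sr concentration.

    Mixing relation, solved for f:
        r_mix = (f*c_sw*r_sw + (1-f)*c_hf*r_hf) / (f*c_sw + (1-f)*c_hf)
    """
    num = c_hf * (r_mix - r_hf)
    den = c_sw * (r_sw - r_mix) + c_hf * (r_mix - r_hf)
    return num / den

# Illustrative endmembers: seawater 87Sr/86Sr ~0.70918, end-member vent
# fluid ~0.7038, with equal Sr concentrations assumed for simplicity
f = seawater_fraction(0.7065, 0.70918, 0.7038, 87.0, 87.0)
```

With equal Sr concentrations the relation is linear in the isotope ratio; differing concentrations in the real endmembers make the mixing curve hyperbolic, which is why the concentration weighting is kept explicit.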
Abstract:
We introduce two probabilistic, data-driven models that predict a ship's speed and the situations where a ship is probable to get stuck in ice based on the joint effect of ice features such as the thickness and concentration of level ice, ice ridges, rafted ice, moreover ice compression is considered. To develop the models to datasets were utilized. First, the data from the Automatic Identification System about the performance of a selected ship was used. Second, a numerical ice model HELMI, developed in the Finnish Meteorological Institute, provided information about the ice field. The relations between the ice conditions and ship movements were established using Bayesian learning algorithms. The case study presented in this paper considers a single and unassisted trip of an ice-strengthened bulk carrier between two Finnish ports in the presence of challenging ice conditions, which varied in time and space. The obtained results show good prediction power of the models. This means, on average 80% for predicting the ship's speed within specified bins, and above 90% for predicting cases where a ship may get stuck in ice. We expect this new approach to facilitate the safe and effective route selection problem for ice-covered waters where the ship performance is reflected in the objective function.
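The Bayesian learning step that relates binned ice conditions to ship speed can be illustrated with a toy discrete naive Bayes classifier. The bins and observations below are hypothetical, and the paper's actual models are richer; this only sketches the idea of predicting a speed bin from ice-feature bins:

```python
from collections import Counter, defaultdict

def train_naive_bayes(samples):
    """samples: list of ((thickness_bin, concentration_bin), speed_bin).
    Returns class counts, per-feature conditional counts, and the
    observed value sets needed for Laplace smoothing."""
    class_counts = Counter(lbl for _, lbl in samples)
    cond = defaultdict(Counter)      # (feature index, class) -> value counts
    values = defaultdict(set)        # feature index -> observed values
    for feats, lbl in samples:
        for i, v in enumerate(feats):
            cond[(i, lbl)][v] += 1
            values[i].add(v)
    return class_counts, cond, values

def predict(feats, class_counts, cond, values):
    """Most probable class under naive Bayes with Laplace smoothing."""
    total = sum(class_counts.values())
    best, best_p = None, -1.0
    for lbl, n in class_counts.items():
        p = n / total
        for i, v in enumerate(feats):
            p *= (cond[(i, lbl)][v] + 1) / (n + len(values[i]))
        if p > best_p:
            best, best_p = lbl, p
    return best

# Hypothetical binned observations: (ice thickness, concentration) -> speed
data = [(("thick", "high"), "slow"), (("thick", "high"), "slow"),
        (("thin", "low"), "fast"), (("thin", "low"), "fast"),
        (("thin", "high"), "slow"), (("thick", "low"), "slow")]
model = train_naive_bayes(data)
```

The reported accuracies (80% for speed bins, above 90% for besetting events) correspond to how often such a predicted bin matches the observed one on held-out voyages.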
Abstract:
Maritime accidents involving ships carrying passengers may pose a high risk with respect to human casualties. For effective risk mitigation, insight into the process of risk escalation is needed. This requires a proactive approach to risk modelling for maritime transportation systems. Most of the existing models are based on historical data on maritime accidents, and thus they can be considered reactive rather than proactive. This paper introduces a systematic, transferable and proactive framework for estimating the risk for maritime transportation systems, meeting the requirements stemming from the adopted formal definition of risk. The framework focuses on ship-ship collisions in the open sea, with a RoRo/Passenger ship (RoPax) considered as the struck ship. First, it covers the identification of the events that follow a collision between two ships in the open sea, and, second, it evaluates the probabilities of these events, concluding by determining the severity of a collision. The risk framework is developed with the use of Bayesian Belief Networks and utilizes a set of analytical methods for the estimation of the risk model parameters. The model can be run with the use of the GeNIe software package. Finally, a case study is presented, in which the risk framework developed here is applied to a maritime transportation system operating in the Gulf of Finland (GoF). The results obtained are compared to the historical data and available models in which a RoPax was involved in a collision, and good agreement with the available records is found.
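A Bayesian Belief Network of the kind run in GeNIe is, at its core, a set of conditional probability tables queried by marginalization and Bayes' rule. A toy two-node sketch of the escalation idea, with illustrative numbers that are not taken from the paper's model:

```python
# Toy chain: given a collision, a hull breach may occur, and severity
# depends on whether it does. All probabilities are illustrative only.
p_breach = 0.3                              # P(breach | collision)
p_high_given = {True: 0.6, False: 0.05}     # P(high severity | breach state)

def p_high_severity():
    """Marginal P(high severity | collision), enumerating breach states."""
    return sum((p_breach if b else 1 - p_breach) * p_high_given[b]
               for b in (True, False))

def p_breach_given_high():
    """Diagnostic query via Bayes' rule: P(breach | high severity)."""
    return p_breach * p_high_given[True] / p_high_severity()
```

A full BBN generalizes this pattern to many interlinked nodes (striking ship type, impact location, flooding, evacuation), which is what tools such as GeNIe automate.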
Abstract:
In order to study late Holocene changes in sediment supply into the northern Arabian Sea, a 5.3 m long gravity core was investigated by high-resolution geochemical and mineralogical techniques. The sediment core was recovered at a water depth of 956 m from the continental slope off Pakistan and covers a time span of 5 kyr. The source areas delivering material to the sampling site did not, however, change during the late Holocene and were active throughout the year.