953 results for immaterial capabilities
Abstract:
Introduction: Therapeutic drug monitoring (TDM) aims at optimizing treatment by individualizing the dosage regimen based on measured blood concentrations. Maintaining concentrations within a target range requires pharmacokinetic and clinical capabilities. Bayesian calculation represents a gold standard in the TDM approach but requires computing assistance. Over the last decades, computer programs have been developed to assist clinicians in this task. The aim of this benchmarking was to assess and compare computer tools designed to support TDM clinical activities.
Method: A literature and Internet search was performed to identify software. All programs were tested on a common personal computer. Each program was scored against a standardized grid covering pharmacokinetic relevance, user-friendliness, computing aspects, interfacing, and storage. A weighting factor was applied to each criterion of the grid to reflect its relative importance. To assess the robustness of the software, six representative clinical vignettes were also processed through all of them.
Results: Twelve software tools were identified, tested, and ranked, providing a comprehensive review of the available software's characteristics. The number of drugs handled varies widely, and 8 programs offer users the ability to add their own drug models. Ten programs can compute Bayesian dosage adaptation based on a blood concentration (a posteriori adjustment), while 9 can also suggest an a priori dosage regimen (prior to any blood concentration measurement) based on individual patient covariates such as age, gender, and weight. Among those applying Bayesian analysis, one uses a non-parametric approach. The top two programs emerging from this benchmark are MwPharm and TCIWorks. The other programs evaluated also have good potential but are less sophisticated (e.g., in terms of storage or report generation) or less user-friendly.
Conclusion: Whereas two integrated programs are at the top of the ranked list, such complex tools may not fit all institutions, and each software tool must be considered with respect to the individual needs of hospitals or clinicians. Interest in computing tools to support therapeutic monitoring is still growing. Although developers have put effort into them in recent years, there is still room for improvement, especially in terms of institutional information system interfacing, user-friendliness, capacity of data storage, and report generation.
Abstract:
Objectives: Therapeutic drug monitoring (TDM) aims at optimizing treatment by individualizing the dosage regimen based on blood concentration measurements. Maintaining concentrations within a target range requires pharmacokinetic (PK) and clinical capabilities. Bayesian calculation represents a gold standard in the TDM approach but requires computing assistance. The aim of this benchmarking was to assess and compare computer tools designed to support TDM clinical activities.
Methods: The literature and Internet were searched to identify software. Each program was scored against a standardized grid covering pharmacokinetic relevance, user-friendliness, computing aspects, interfacing, and storage. A weighting factor was applied to each criterion of the grid to reflect its relative importance. To assess the robustness of the software, six representative clinical vignettes were also processed through all of them.
Results: Twelve software tools were identified, tested, and ranked, providing a comprehensive review of the available software characteristics. The number of drugs handled varies from 2 to more than 180, and integration of different population types is available in some programs. In addition, 8 programs offer the ability to add new drug models based on population PK data. Ten tools incorporate Bayesian computation to predict the dosage regimen (individual parameters are calculated based on population PK models). All of them can compute Bayesian a posteriori dosage adaptation based on a blood concentration, while 9 can also suggest an a priori dosage regimen based only on individual patient covariates. Among those applying Bayesian analysis, MM-USC*PACK uses a non-parametric approach. The top two programs emerging from this benchmark are MwPharm and TCIWorks. The other programs evaluated also have good potential but are less sophisticated or less user-friendly.
Conclusions: Whereas two software packages are ranked at the top of the list, such complex tools may not fit all institutions, and each program must be considered with respect to the individual needs of hospitals or clinicians. Programs should be easy and fast to use for routine activities, including for non-experienced users. Although interest in TDM tools is growing and efforts have been made in recent years, there is still room for improvement, especially in terms of institutional information system interfacing, user-friendliness, capability of data storage, and automated report generation.
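To make the Bayesian step concrete, the following is a minimal sketch of the a posteriori (MAP) adaptation such tools automate, assuming a hypothetical one-compartment IV bolus model with lognormal priors and additive residual error; every parameter value, the measurement, and the target are illustrative placeholders and are not taken from any of the benchmarked programs.

```python
# A minimal sketch of Bayesian a posteriori (MAP) dose adaptation for a
# hypothetical one-compartment IV bolus model. All values are placeholders.
import numpy as np
from scipy.optimize import minimize

# Hypothetical population priors (lognormal): clearance CL (L/h), volume V (L)
CL_POP, V_POP = 4.0, 30.0          # population typical values
OMEGA_CL, OMEGA_V = 0.3, 0.2       # between-subject SD on the log scale
SIGMA = 0.5                        # additive residual error SD (mg/L)

dose, t_obs, c_obs = 500.0, 8.0, 6.2   # mg, h, mg/L (example measurement)

def predict(log_cl, log_v, t):
    """Concentration of a one-compartment IV bolus: C(t) = (D/V) e^{-(CL/V) t}."""
    cl, v = np.exp(log_cl), np.exp(log_v)
    return (dose / v) * np.exp(-(cl / v) * t)

def neg_log_posterior(x):
    """Prior penalty on log parameters plus residual penalty on the observation."""
    log_cl, log_v = x
    prior = ((log_cl - np.log(CL_POP)) / OMEGA_CL) ** 2 / 2 \
          + ((log_v - np.log(V_POP)) / OMEGA_V) ** 2 / 2
    resid = ((c_obs - predict(log_cl, log_v, t_obs)) / SIGMA) ** 2 / 2
    return prior + resid

fit = minimize(neg_log_posterior, x0=[np.log(CL_POP), np.log(V_POP)])
cl_i, v_i = np.exp(fit.x)                    # individual MAP estimates

# A posteriori dose suggestion: hit a target concentration at time tau after dosing
c_target, tau = 8.0, 8.0                     # mg/L, h (placeholders)
new_dose = c_target * v_i * np.exp((cl_i / v_i) * tau)
print(f"MAP CL={cl_i:.2f} L/h, V={v_i:.1f} L, suggested dose={new_dose:.0f} mg")
```

In practice such programs rely on validated population PK models and richer error structures; the sketch only shows the shape of the computation.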
Abstract:
As adult height is a well-established retrospective measure of health and standard of living, it is important to understand the factors that determine it. Among them, the influence of socio-environmental factors has been subjected to empirical scrutiny. This paper explores the influence of generational (or environmental) effects and of individual and gender-specific heterogeneity on adult height. Our data set is from contemporary Spain, a country governed by an authoritarian regime between 1939 and 1977. First, we use standard and quantile regression analysis to identify the determinants of self-reported adult height and to measure the influence of individual heterogeneity. Second, we use a Blinder-Oaxaca decomposition approach to explain the 'gender height gap' and its distribution, so as to measure the influence of individual heterogeneity on this gap. Our findings suggest a significant increase in adult height in the generations that benefited from the country's economic liberalization in the 1950s, and especially those brought up after the transition to democracy in the 1970s. In contrast, distributional effects on height suggest that only in recent generations has 'height increased more among the tallest'. Although the mean gender height gap is 11 cm, generational effects and other controls such as individual capabilities explain on average roughly 5% of this difference, a figure that rises to 10% in the lowest 10% quantile.
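As an illustration of the decomposition method named above, here is a minimal sketch of a mean Blinder-Oaxaca decomposition on synthetic data; the covariates, coefficients, and sample are invented for the example and do not reproduce the paper's Spanish data.

```python
# A minimal sketch of a mean Blinder-Oaxaca decomposition of a gender height
# gap using OLS on synthetic data (illustrative values only).
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 1000

def simulate(group_shift):
    # Hypothetical covariates: birth cohort (0 = oldest, 2 = youngest) and a
    # capability proxy (e.g., years of schooling).
    cohort = rng.integers(0, 3, n)
    school = rng.normal(10, 3, n)
    height = 160 + group_shift + 1.5 * cohort + 0.3 * school + rng.normal(0, 6, n)
    X = sm.add_constant(np.column_stack([cohort, school]))
    return X, height

X_m, h_m = simulate(group_shift=11.0)   # men
X_f, h_f = simulate(group_shift=0.0)    # women

b_m = sm.OLS(h_m, X_m).fit().params
b_f = sm.OLS(h_f, X_f).fit().params

# gap = (Xbar_m - Xbar_f)' b_m  +  Xbar_f' (b_m - b_f)
gap = h_m.mean() - h_f.mean()
explained = (X_m.mean(axis=0) - X_f.mean(axis=0)) @ b_m   # endowment part
unexplained = X_f.mean(axis=0) @ (b_m - b_f)              # coefficient part
print(f"gap={gap:.1f} cm, explained={explained:.2f}, unexplained={unexplained:.2f}")
```

With identical covariate distributions across groups, almost the whole gap lands in the unexplained component, mirroring the paper's finding that observed controls account for only a small share of the 11 cm gap.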
Abstract:
Drilled shafts have been used in the US for more than 100 years in bridges and buildings as a deep foundation alternative. For many of these applications, the drilled shafts were designed using the Working Stress Design (WSD) approach. Even though WSD has been used successfully in the past, a move toward Load and Resistance Factor Design (LRFD) for foundation applications began when the Federal Highway Administration (FHWA) issued a policy memorandum on June 28, 2000. The policy memorandum requires all new bridges initiated after October 1, 2007, to be designed according to the LRFD approach. This ensures compatibility between the superstructure and substructure designs, and provides a means of consistently incorporating sources of uncertainty into each load and resistance component. Regionally calibrated LRFD resistance factors are permitted by the American Association of State Highway and Transportation Officials (AASHTO) to improve the economy and competitiveness of drilled shafts. To achieve this goal, a database for Drilled SHAft Foundation Testing (DSHAFT) has been developed. DSHAFT is aimed at assimilating high-quality drilled shaft test data from Iowa and the surrounding regions and at identifying the need for further tests in suitable soil profiles. This report introduces DSHAFT and demonstrates its features and capabilities, such as an easy-to-use storage and sharing tool for providing access to key information (e.g., soil classification details and cross-hole sonic logging reports). DSHAFT embodies a model for effective, regional LRFD calibration procedures consistent with the PIle LOad Test (PILOT) database, which contains driven pile load tests accumulated from the state of Iowa. PILOT is now available for broader use at the project website: http://srg.cce.iastate.edu/lrfd/. DSHAFT, available in electronic form at http://srg.cce.iastate.edu/dshaft/, currently comprises 32 separate load tests provided by the Illinois, Iowa, Minnesota, Missouri, and Nebraska state departments of transportation and/or departments of roads. In addition to serving as a manual for DSHAFT and providing a summary of the available data, this report provides a preliminary analysis of the load test data from Iowa and will open up opportunities for others to share their data through this quality-assured process, thereby providing a platform to improve the LRFD approach to drilled shafts, especially in the Midwest region.
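For readers unfamiliar with how such a database feeds calibration, the sketch below implements the closed-form first-order second-moment (FOSM) equation commonly used in AASHTO-style LRFD calibration under lognormal load and resistance; the resistance statistics and load parameters are generic placeholders, not values derived from DSHAFT.

```python
# A minimal sketch of the closed-form FOSM resistance-factor equation often
# used in AASHTO-style LRFD calibration (lognormal load and resistance).
# Statistics below are placeholders, not values from the DSHAFT database.
import math

def fosm_phi(lam_r, cov_r, beta_t,
             qd_ql=2.0,                 # dead-to-live load ratio (assumed)
             gam_d=1.25, gam_l=1.75,    # AASHTO load factors
             lam_qd=1.08, cov_qd=0.128, # dead-load bias / COV (typical values)
             lam_ql=1.15, cov_ql=0.18): # live-load bias / COV (typical values)
    """Resistance factor phi for a target reliability index beta_t, given the
    resistance bias lam_r (measured/predicted capacity) and its COV."""
    cov_q2 = cov_qd**2 + cov_ql**2
    num = lam_r * (gam_d * qd_ql + gam_l) * math.sqrt((1 + cov_q2) / (1 + cov_r**2))
    den = (lam_qd * qd_ql + lam_ql) * math.exp(
        beta_t * math.sqrt(math.log((1 + cov_r**2) * (1 + cov_q2))))
    return num / den

# Example: resistance bias 1.10, COV 0.35, target reliability index 3.0
print(f"phi = {fosm_phi(lam_r=1.10, cov_r=0.35, beta_t=3.0):.2f}")
```

In a regional calibration, the bias and COV inputs would come from load tests such as those assembled in DSHAFT, which is precisely why a quality-assured database matters.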
Abstract:
The purpose of this project was to investigate the potential for collecting and using data from mobile terrestrial laser scanning (MTLS) technology to reduce the need for traditional survey methods in the development of highway improvement projects at the Iowa Department of Transportation (Iowa DOT). The primary interest in investigating mobile scanning technology is to minimize the exposure of field surveyors to dangerous high-volume traffic situations. Issues investigated were cost, timeframe, accuracy, contracting specifications, data capture extents, data extraction capabilities, and data storage associated with mobile scanning. The project area selected for evaluation was the I-35/IA 92 interchange in Warren County, Iowa. The project covers approximately one mile of I-35, one mile of IA 92, 4 interchange ramps, and the bridges within these limits. Delivered LAS and image files for this project totaled almost 31 GB. There is nearly a 6-fold increase in the size of the scan data after post-processing. Camera data, when enabled, produced approximately 900 MB of imagery per mile using a 2-camera, 5-megapixel system. A comparison was made between 1823 points on the pavement surveyed by Iowa DOT staff using a total station and the same points generated through the MTLS process. The data acquired through the MTLS and data processing met the Iowa DOT specifications for engineering survey. A list of benefits and challenges is included in the detailed report. With the success of this project, it is anticipated that additional projects will be scanned for the Iowa DOT for use in the development of highway improvement projects.
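A point comparison of this kind typically reduces to per-point elevation differences and summary statistics; the following is a minimal sketch of such a check on synthetic data, not the Iowa DOT's actual processing chain.

```python
# A minimal sketch of a total-station vs. MTLS point accuracy check.
# The elevations below are synthetic; only the point count matches the study.
import numpy as np

rng = np.random.default_rng(3)
n = 1823                                      # number of compared pavement points
z_ts = rng.normal(250.0, 2.0, n)              # total-station elevations (m)
z_mtls = z_ts + rng.normal(0.002, 0.006, n)   # MTLS elevations: small bias + noise

dz = z_mtls - z_ts                            # per-point error, MTLS minus survey
print(f"n = {n}")
print(f"mean error   = {dz.mean() * 1000:+.1f} mm")
print(f"RMSE         = {np.sqrt((dz**2).mean()) * 1000:.1f} mm")
print(f"95th pct |e| = {np.percentile(np.abs(dz), 95) * 1000:.1f} mm")
```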
Abstract:
This work is divided into three volumes: Volume I: Strain-Based Damage Detection; Volume II: Acceleration-Based Damage Detection; Volume III: Wireless Bridge Monitoring Hardware.
Volume I: In this work, a previously developed structural health monitoring (SHM) system was advanced toward a ready-for-implementation system. Improvements were made with respect to automated data reduction/analysis, data acquisition hardware, sensor types, and communication network architecture. The statistical damage-detection tools, control-chart-based damage-detection methodologies, were further investigated and advanced. For the validation of the damage-detection approaches, strain data were obtained from a sacrificial specimen with simulated damage attached to the previously utilized US 30 Bridge over the South Skunk River (in Ames, Iowa). To provide an enhanced ability to detect changes in the behavior of the structural system, various control chart rules were evaluated. False indications and true indications were studied to compare the damage-detection ability of each methodology and each control chart rule. An autonomous software program called Bridge Engineering Center Assessment Software (BECAS) was developed to control all aspects of the damage-detection processes. BECAS requires no user intervention after initial configuration and training.
Volume II: In this work, a previously developed structural health monitoring (SHM) system was advanced toward a ready-for-implementation system. Improvements were made with respect to automated data reduction/analysis, data acquisition hardware, sensor types, and communication network architecture. The objective of this part of the project was to validate and integrate a vibration-based damage-detection algorithm with the strain-based methodology formulated by the Iowa State University Bridge Engineering Center. This report volume (Volume II) presents the use of vibration-based damage-detection approaches as local methods to quantify damage at critical areas in structures. Acceleration data were collected and analyzed to evaluate the relationships between sensors and with changes in environmental conditions. A sacrificial specimen was investigated to verify the damage-detection capabilities, and this volume presents a transmissibility concept and damage-detection algorithm that show potential to sense local changes in the dynamic stiffness between points across a joint of a real structure. The validation and integration of the vibration-based and strain-based damage-detection methodologies will add significant value to Iowa's current and future bridge maintenance, planning, and management.
Volume III: In this work, a previously developed structural health monitoring (SHM) system was advanced toward a ready-for-implementation system. Improvements were made with respect to automated data reduction/analysis, data acquisition hardware, sensor types, and communication network architecture. This report volume (Volume III) summarizes the energy harvesting techniques and prototype development for a bridge monitoring system that uses wireless sensors. The wireless sensor nodes are used to collect strain measurements at critical locations on a bridge. The bridge monitoring hardware system consists of a base station and multiple self-powered wireless sensor nodes. The base station is responsible for the synchronization of data sampling on all nodes and for data aggregation.
Each wireless sensor node includes a sensing element, a processing and wireless communication module, and an energy harvesting module. The hardware prototype for a wireless bridge monitoring system was developed and tested on the US 30 Bridge over the South Skunk River in Ames, Iowa. The functions and performance of the developed system, including strain data, energy harvesting capacity, and wireless transmission quality, were studied and are covered in this volume.
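To illustrate the control-chart idea underlying the strain-based methodology of Volume I, here is a minimal sketch that trains limits on baseline strain data and applies two Western Electric-style rules to simulated monitoring data; the data, limits, and rule choices are illustrative, not BECAS's actual configuration.

```python
# A minimal sketch of control-chart-based damage detection on strain data:
# limits are trained on baseline (undamaged) data, then later samples are
# flagged by simple control-chart rules. All values are illustrative.
import numpy as np

rng = np.random.default_rng(1)
baseline = rng.normal(100.0, 5.0, 500)            # training strains (microstrain)
monitored = np.concatenate([rng.normal(100.0, 5.0, 200),
                            rng.normal(112.0, 5.0, 50)])   # simulated damage shift

mu, sigma = baseline.mean(), baseline.std(ddof=1)

# Rule 1 (Western Electric): one point beyond the 3-sigma limits
rule1 = np.abs(monitored - mu) > 3 * sigma

# Rule 4 analogue: eight consecutive points on the same side of the center line
side = np.sign(monitored - mu)
rule4 = np.array([i >= 7 and abs(side[i - 7:i + 1].sum()) == 8
                  for i in range(len(monitored))])

# Additional rules raise sensitivity but also false indications -- the
# trade-off the report evaluates across rules and methodologies.
alarms = np.flatnonzero(rule1 | rule4)
print(f"first alarm at sample {alarms[0]}, {len(alarms)} flagged samples")
```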
Abstract:
ABSTRACT: A firm's competitive advantage can arise from internal resources as well as from an interfirm network. This dissertation investigates the competitive advantage of a firm involved in an innovation network by integrating strategic management theory and social network theory. It develops theory and provides empirical evidence illustrating how a networked firm enables network value and appropriates this value in an optimal way according to its strategic purpose. The four inter-related essays in this dissertation provide a framework that sheds light on the extraction of value from an innovation network by managing and designing the network in a proactive manner.
The first essay reviews research in social network theory and knowledge transfer management, and identifies the crucial factors of innovation network configuration for a firm's learning performance or innovation output. The findings suggest that network structure, network relationship, and network position all impact a firm's performance. Although the previous literature disagrees about the impact of dense versus sparse structures, as well as strong versus weak ties, case evidence from Chinese software companies reveals that dense and strong connections with partners are positively associated with firms' performance.
The second essay is a theoretical essay that illustrates the limitations of social network theory for explaining the source of network value and offers a new theoretical model that applies the resource-based view to network environments. It suggests that network configurations, such as network structure, network relationship, and network position, can be considered important network resources. In addition, this essay introduces the concept of network capability, and suggests that four types of network capabilities play an important role in unlocking the potential value of network resources and determining the distribution of network rents between partners. This essay also highlights the contingent effects of network capability on a firm's innovation output, and explains how the different impacts of network capability depend on a firm's strategic choices. This new theoretical model has been pre-tested with a case study of the Chinese software industry, which enhances the internal validity of the theory.
The third essay addresses the questions of what impact network capability has on firm innovation performance and what the antecedent factors of network capability are. This essay employs a structural equation modelling methodology on a sample of 211 Chinese high-tech firms. It develops a measurement of network capability and reveals that networked firms deal with cooperation and coordination with partners on different levels according to their levels of network capability. The empirical results also suggest that IT maturity, openness of culture, the management system involved, and experience with network activities are antecedents of network capability. Furthermore, a two-group analysis of the role of international partners shows that when there is a culture and norm gap with foreign partners, a firm must mobilize more resources and effort to improve its performance with respect to its innovation network.
The fourth essay addresses the way in which network capabilities influence firm innovation performance.
Using hierarchical multiple regression with data from Chinese high-tech firms, the findings suggest that knowledge transfer significantly, though partially, mediates the relationship between network capabilities and innovation performance. The findings also reveal that the impacts of network capabilities vary with the environment and with the strategic decision the firm has made: exploration or exploitation. Network constructing capability has a greater positive impact on, and contributes more to, innovation performance than network operating capability in an exploration network. Network operating capability is more important than network constructing capability for innovative firms in an exploitation network. These findings highlight that a firm can shape its innovation network proactively for better benefits, but when it does so, it should adjust its focus and efforts in accordance with its innovation purposes or strategic orientation.
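The partial-mediation finding in the fourth essay rests on a standard set of regression steps; the sketch below reproduces those steps on synthetic data, with variable names invented for illustration rather than the dissertation's actual measures.

```python
# A minimal sketch of the classic regression steps for a partial-mediation
# test (knowledge transfer mediating network capability -> innovation
# performance), on synthetic data with illustrative coefficients.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(2)
n = 211                                   # sample size matching the essay
net_cap = rng.normal(0, 1, n)             # network capability score
k_transfer = 0.6 * net_cap + rng.normal(0, 1, n)             # mediator
innovation = 0.3 * net_cap + 0.5 * k_transfer + rng.normal(0, 1, n)

# Step 1: total effect of capability on innovation
c_total = sm.OLS(innovation, sm.add_constant(net_cap)).fit().params[1]

# Step 2: effect of capability with the mediator included
X = sm.add_constant(np.column_stack([net_cap, k_transfer]))
c_direct = sm.OLS(innovation, X).fit().params[1]

# Partial mediation: the direct effect shrinks but remains nonzero
print(f"total={c_total:.2f}, direct={c_direct:.2f}, "
      f"indirect={c_total - c_direct:.2f}")
```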
Abstract:
SUMMARY: As a result of evolution, humans are equipped with an intricate but very effective immune system with multiple defense mechanisms primarily providing protection from infections. This system comprises various cell types, including T-lymphocytes, which are able to recognize and directly kill infected cells. T-cells are able to recognize not only cells carrying foreign antigens, such as virus-infected cells, but also autologous cells. In autoimmune diseases, e.g. multiple sclerosis, T-cells attack autologous cells and cause the destruction of healthy tissue. To prevent aberrant immune reactions, but also to prevent damage caused by an overreacting immune response against foreign targets, there are multiple systems in place that attenuate T-cell responses. By contrast, anti-self immune responses may be highly welcome in malignant diseases. It has been demonstrated that activated T-cells are able to recognize and lyse tumor cells, and may even lead to successful cure of cancer patients. Through vaccination, and especially with the help of powerful adjuvants, frequencies of tumor-reactive T-cells can be augmented drastically. However, the efficacy of anti-tumor responses is diminished by the same checks and balances that protect the human body from harm induced by overly activated T-cells in infections. In the context of my thesis, we studied spontaneous and vaccination-induced T-cell responses in melanoma patients. The aim of my studies was to identify situations of T-cell suppression and to pinpoint immune-suppressive mechanisms triggered by malignant diseases. We applied recently developed techniques such as multiparameter flow cytometry and gene arrays, allowing the characterization of tumor-reactive T-cells directly ex vivo. In our project, we determined the functional capabilities, protein expression, and gene expression profiles of small numbers of T-cells from metastatic tissue and blood obtained from healthy donors and melanoma patients. We found evidence that tumor-specific T-cells were functionally efficient effector cells in peripheral blood, but severely exhausted in metastatic tissue. Our molecular screening revealed the upregulation of multiple inhibitory receptors on tumor-specific T-cells, likely implicated in T-cell exhaustion. Functional attenuation of tumor-specific T-cells via inhibitory receptors depended on the anatomical location and on immune-suppressive mechanisms in the tumor microenvironment, which appeared more important than self-tolerance and anergy mechanisms. Our data reveal novel potential targets for cancer therapy and contribute to the understanding of cancer biology.
Abstract:
Calcium magnesium acetate (CMA) has been identified by Bjorksten Research Laboratories as an environmentally harmless alternative to sodium or calcium chloride for deicing highways. Their study found CMA to be noncorrosive to steel, aluminum, and zinc, with little or no anticipated environmental impact. When used, it degrades into elements found in abundance in nature. Its deicing capabilities were found to be similar to those of sodium chloride. The neutralized CMA they produced did cause scaling of PC concrete, but they did not expect mildly alkaline CMA to have this effect. In the initial investigation of CMA at the Iowa DOT laboratory, it was found that CMA produced from hydrated lime and acetic acid was a light, fluffy material. It was recognized that a deicer in this form would be difficult to distribute effectively on highways without considerable wind loss. A process was developed to produce CMA in the presence of sand to increase particle weight. In this report the product of this process, which consists of sand particles coated with CMA, is referred to as "CMA deicer". The mixture of salts, calcium magnesium acetate, is referred to as "CMA". The major problems with CMA for deicing are: (1) it is not commercially available, (2) it is expensive with present production methods, and (3) very little is known about how it performs on highways under actual deicing conditions. In view of the potential benefits this material offers, it is highly desirable to find solutions or answers to these problems. This study provides information to advance that effort. The study consisted of four principal tasks:
1. Production of CMA Deicer. The objective was to further develop the laboratory process for producing CMA deicer on a pilot-plant basis and to produce a sufficient quantity for field trials. The original proposal called for producing 20 tons of CMA deicer.
2. Field Evaluation of CMA Deicer. The objective was to evaluate the effectiveness of CMA deicer used under field conditions and to obtain information on application procedures. Performance was compared with a regular 50/50 mixture of sand and sodium chloride.
3. Investigation of Effects of CMA on PC Concrete. The objective was to determine any scaling effect that mildly alkaline CMA might have on PC concrete. Comparison was made with calcium chloride.
4. Determination of the Feasibility of Producing High-Magnesium CMA. The objective was to investigate the possibility of producing a CMA deicer with a magnesium acetate content well above that produced from dolomitic lime. A high magnesium acetate content is desirable because pure magnesium acetate has a water eutectic of -22°F, compared with +5°F for calcium acetate, and is therefore a more effective deicer.
Abstract:
Excessive speed on State and County highways is recognized as a serious problem by many Iowans. Speed increases both the risk and the severity of accidents. Studies conducted by the FHWA and NHTSA have concluded that if average speeds increased by five MPH, fatalities would increase by at least 2,200 annually. Along with the safety problems associated with excessive speed are important energy considerations. When the national speed limit was lowered to 55 MPH in 1974, a tremendous savings in fuel was realized. The estimated actual savings for automobiles amounted to 2.2 billion gallons, an average of 20.75 gallons for each of the 106 million automobiles registered in 1975. These benefits prompted the Federal-Aid Amendment of 1974, requiring annual State enforcement certification as a prerequisite for approval of Federal-aid highway projects. In 1978, the United States D.O.T. recommended to Congress significant changes in speed limit legislation designed to increase compliance with the national speed limit. The Highway Safety Act of 1978 provides both for withholding Federal-aid highway funds and for awarding incentive grants based on speed compliance data submitted annually. The objective of this study was to develop and make operational an automatic speed monitoring system with flexible capabilities for collecting accurate speed data on all road systems in Iowa. It was concluded that the Automatic Speed Monitoring Program in Iowa has been successful and that the needed data are being collected in the most economical manner possible.
Abstract:
ISU's proposed research will (1) develop methods for designing clean and efficient burners for low-Btu producer gas and medium-Btu syngas, (2) develop catalysts and flow reactors to produce ethanol from medium-Btu synthesis gas, and (3) upgrade the BECON gasifier system to enable medium-Btu syngas production and greatly enhanced capabilities for the detailed gas analysis needed by both (1) and (2). This project addresses core development needs to enable the grain ethanol industry to reduce its natural gas demand and ultimately transition to cellulosic ethanol production.
Abstract:
This project was initiated in 1988 to study the effectiveness of four different construction techniques for establishing a stable base on a granular-surfaced roadway. After base stabilization, the roadway was seal coated, eliminating the dust problems associated with granular-surfaced roads. When monies become available, the roadway can be surfaced with a more permanent structure. A 2.8 mi (4.5 km) section of Horseshoe Road in Dubuque County was divided into four divisions for this study. This report discusses the procedures used during construction of these divisions. Problems and possible solutions have been analyzed to better understand the capabilities of the materials and construction techniques used on the project. The project had the following results: high structural ratings and soil K factors for the BIO CAT and Consolid bases did not translate into good roadway performance; the macadam base had the best overall performance; the Tensar fabric had no noticeable effect on the macadam base; and the HFE-300 performed acceptably.
Abstract:
This report describes a new approach to the problem of scheduling highway construction projects. The technique can accurately model linear activities and identify the controlling activity path on a linear schedule. Current scheduling practices are unable to accomplish these two tasks with any accuracy for linear activities, leaving planners and managers suspicious of the information they provide. Basic linear scheduling is not a new technique, and many attempts have been made to apply it to various types of work in the past. However, the technique has never been widely used because it lacked an analytical approach to activity relationships and to determining controlling activities. The Linear Scheduling Model (LSM) developed in this report completes the linear scheduling technique by adding to it all of the analytical capabilities, including computer applications, present in CPM scheduling today. The LSM has tremendous potential and will likely have a significant impact on the way linear construction is scheduled in the future.
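To give a flavor of the analytical capability the LSM adds, the following is a minimal sketch of one core computation: locating the least time buffer between two consecutive linear activities, the point at which the successor is controlled by its predecessor. The activities, rates, and stations are invented for the example, and the full LSM controlling-path analysis is considerably more involved.

```python
# A minimal sketch of a linear-scheduling buffer computation: each activity is
# a line t(station) with a constant production rate, and the minimum time
# buffer between consecutive activities marks a potential controlling point.
import numpy as np

def activity_time(start_time, start_sta, rate):
    """Return t(station) for a linear activity with constant rate (sta/day)."""
    return lambda sta: start_time + (sta - start_sta) / rate

paving   = activity_time(start_time=0.0,  start_sta=0.0, rate=0.5)  # sta/day
shoulder = activity_time(start_time=12.0, start_sta=0.0, rate=0.8)

stations = np.linspace(0.0, 10.0, 101)
buffer = shoulder(stations) - paving(stations)   # time buffer at each station

i = int(np.argmin(buffer))
print(f"minimum buffer {buffer[i]:.1f} days at station {stations[i]:.1f}; "
      "the successor is controlled there (a negative value would mean conflict)")
```

Because the faster shoulder work converges on the slower paving, the buffer shrinks toward the end of the job, which is exactly the kind of relationship a bar-chart or CPM view of the same work hides.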
Abstract:
Introduction: The field of connectomic research is growing rapidly, resulting from methodological advances in structural neuroimaging on many spatial scales. In particular, progress in diffusion MRI data acquisition and processing has made macroscopic structural connectivity maps available in vivo through Connectome Mapping Pipelines (Hagmann et al, 2008), yielding so-called Connectomes (Hagmann 2005, Sporns et al, 2005). They exhibit both spatial and topological information that constrain functional imaging studies and are relevant to their interpretation. The need has grown for a special-purpose software tool that supports investigations of such connectome data by both clinical researchers and neuroscientists.
Methods: We developed the ConnectomeViewer, a powerful, extensible software tool for visualization and analysis in connectomic research. It uses the newly defined, container-like Connectome File Format, specifying networks (GraphML), surfaces (Gifti), volumes (Nifti), track data (TrackVis), and metadata. The use of Python as the programming language allows it to be cross-platform and gives it access to a multitude of scientific libraries.
Results: Using a flexible plugin architecture, it is easy to enhance functionality for specific purposes. The following features are already implemented:
* Ready use of libraries, e.g. for complex network analysis (NetworkX) and data plotting (Matplotlib). More brain connectivity measures will be implemented in a future release (Rubinov et al, 2009).
* 3D view of networks with node positioning based on the corresponding ROI surface patch; other layouts are possible.
* Picking functionality to select nodes, select edges, get more node information (ConnectomeWiki), and toggle surface representations.
* Interactive thresholding and modality selection of edge properties using filters.
* Arbitrary metadata can be stored for networks, allowing e.g. group-based analysis or meta-analysis.
* A Python shell for scripting; application data is exposed and can be modified or used for further post-processing.
* Visualization pipelines using filters and modules can be composed with Mayavi (Ramachandran et al, 2008).
* An interface to TrackVis to visualize track data; selected nodes are converted to ROIs for fiber filtering.
The Connectome Mapping Pipeline (Hagmann et al, 2008) was used to process 20 healthy subjects into an average Connectome dataset. The figures show the ConnectomeViewer user interface using this dataset; connections are shown that occur in all 20 subjects. The dataset is freely available from the homepage (connectomeviewer.org).
Conclusions: The ConnectomeViewer is a cross-platform, open-source software tool that provides extensive visualization and analysis capabilities for connectomic research. It has a modular architecture, integrates relevant data types, and is completely scriptable. Visit www.connectomics.org to get involved as a user or developer.
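As an example of the kind of scripting the Python shell and NetworkX integration enable, here is a minimal sketch that loads a network from GraphML and computes basic complex-network measures, plus an edge-property threshold mirroring the viewer's interactive filters; the file name and the 'weight' attribute are assumptions for illustration, not part of the released dataset.

```python
# A minimal sketch of connectome network analysis with NetworkX, the library
# the ConnectomeViewer integrates. The GraphML file name is a placeholder.
import networkx as nx

G = nx.read_graphml("average_connectome.graphml")

print(f"nodes: {G.number_of_nodes()}, edges: {G.number_of_edges()}")
print(f"density: {nx.density(G):.3f}")
print(f"average clustering: {nx.average_clustering(G):.3f}")

# Threshold edges on a property, mirroring the viewer's interactive filters
# (assumes a 'weight' attribute on edges; absent attributes default to 0)
strong = [(u, v) for u, v, d in G.edges(data=True) if d.get("weight", 0) > 0.5]
print(f"edges above threshold: {len(strong)}")
```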
Abstract:
4.6 Summary and Conclusion
In this chapter, we have first tried to make precise the distinctions between the concepts of parthood and coincidence and the concepts of causation and causal influence. These distinctions had never been made entirely explicit in the debate on mental causation before, despite the fact that they constantly figure in its background. Section 4.2 then demonstrated that the attained definitions are both compatible with all the solutions elaborated in chapters 2 and 3 and of great help in clarifying both what precisely the mentioned accounts respectively claim and what their mutual connections are. In sections 4.3 and 4.4, we then explored two possible solutions to the problem of mental causation that, at least in these particular versions, have not been explicitly defended in the literature. These solutions we dubbed "overdeterminationism lite" and "plural determinism". We found both accounts to bear impressive explanatory capabilities and to be vulnerable to far fewer problems than is commonly supposed. We also found that they have many corresponding aspects and that their theoretical costs stand in a relation of relative mutual balance. Our final discussion in section 4.5 revealed, however, that overdeterminationism lite should probably be considered the more successful theory. The fact that it needs to endorse the existence of two kinds of causation, although not unproblematic itself, did not appear to be as strong a commitment as that of an ontological hierarchy extending over all time, which at least the broad version of plural determinism was forced to make.