32 results for reconfigurable hardware


Relevance:

10.00%

Publisher:

Abstract:

BACKGROUND: Today, recognition and classification of sequence motifs and protein folds is a mature field, thanks to the availability of numerous comprehensive and easy-to-use software packages and web-based services. Recognition of structural motifs, by comparison, is less well developed and much less frequently used, possibly due to a lack of easily accessible and easy-to-use software. RESULTS: In this paper, we describe an extension of DeepView/Swiss-PdbViewer through which structural motifs may be defined and searched for in large protein structure databases, and we show that common structural motifs involved in stabilizing protein folds are present in evolutionarily and structurally unrelated proteins, including at deeply buried locations that are not obviously related to protein function. CONCLUSIONS: The possibility of defining custom motifs and searching for their occurrence in other proteins permits the identification of recurrent arrangements of residues that could have structural implications. The possibility of doing so without having to maintain a complex software/hardware installation on site brings this technology to experts and non-experts alike.


Current limitations of coronary magnetic resonance angiography (MRA) include a suboptimal signal-to-noise ratio (SNR), which limits spatial resolution and the ability to visualize distal and branch vessel coronary segments. Improved SNR is expected at higher field strengths, which may provide improved spatial resolution. However, a number of potential adverse effects on image quality have been reported at higher field strengths. The limited availability of high-field systems equipped with cardiac-specific hardware and software has previously precluded successful in vivo human high-field coronary MRA data acquisition. In the present study we investigated the feasibility of human coronary MRA at 3.0 T in vivo. The first results obtained in nine healthy adult subjects are presented.


Ubiquitous computing is the emerging trend in computing systems. Based on this observation, this thesis proposes an analysis of the hardware and environmental constraints that govern pervasive platforms. These constraints have a strong impact on the programming of such platforms, so solutions are proposed to facilitate this programming both at the platform and node levels. The first contribution presented in this document combines agent-oriented programming with the principles of bio-inspiration (phylogenesis, ontogenesis and epigenesis) to program pervasive platforms such as the PERvasive computing framework for modeling comPLEX virtually Unbounded Systems platform. The second contribution proposes a method to program efficiently parallelizable applications on each computing node of this platform.


The aim of this study was to prospectively evaluate the accuracy and predictability of new three-dimensionally preformed AO titanium mesh plates for posttraumatic orbital wall reconstruction. We analyzed the preoperative and postoperative clinical and radiologic data of 10 patients with isolated blow-out orbital fractures. Fracture locations were as follows: floor (N = 7; 70%), medial wall (N = 1; 10%), and floor/medial wall (N = 2; 20%). The floor fractures were exposed by a standard transconjunctival approach, whereas a combined transcaruncular transconjunctival approach was used in patients with medial wall fractures. A three-dimensionally preformed AO titanium mesh plate (0.4 mm in thickness) was selected according to the size of the defect, previously measured on the preoperative computed tomographic (CT) scan, and fixed at the inferior orbital rim with 1 or 2 screws. The accuracy of plate positioning in the reconstructed orbit was assessed on the postoperative CT scan. Coronal CT slices were used to measure bony orbital volume with OsiriX Medical Image software, and reconstructed versus uninjured orbital volumes were statistically compared. Nine patients (90%) had a successful treatment outcome without complications. One patient (10%) developed a mechanical limitation of upward gaze with a resulting handicapping diplopia requiring hardware removal. Postoperative orbital CT showed anatomic three-dimensional placement of the orbital mesh plates in all patients. Volume data of the reconstructed orbits matched those of the contralateral uninjured orbits to within 2.5 cm³, and there was no significant difference in volume between the reconstructed and uninjured orbits. This preliminary study demonstrates that three-dimensionally preformed AO titanium mesh plates for posttraumatic orbital wall reconstruction result in (1) a high rate of success with an acceptable rate of major clinical complications (10%) and (2) an anatomic restoration of the bony orbital contour and volume that closely approximates that of the contralateral uninjured orbit.
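The volume comparison above can be sketched numerically. The paired values below are invented for illustration (not the study's data); the check mirrors the reported within-2.5 cm³ accuracy criterion:

```python
# Hypothetical paired orbital volumes in cm^3: reconstructed vs. contralateral
# uninjured side. All values are illustrative.
reconstructed = [24.1, 26.3, 25.0, 27.8, 23.5, 26.9, 25.4, 24.8, 26.1, 25.7]
uninjured     = [24.9, 25.6, 24.2, 28.4, 24.7, 26.1, 25.9, 23.9, 27.2, 25.1]

diffs = [r - u for r, u in zip(reconstructed, uninjured)]
mean_diff = sum(diffs) / len(diffs)          # systematic bias of reconstruction
max_abs_diff = max(abs(d) for d in diffs)    # worst-case per-patient mismatch

# Accuracy criterion reported in the study: agreement to within 2.5 cm^3
within_tolerance = max_abs_diff <= 2.5
```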


Objective: To report a single-center experience treating patients with squamous-cell carcinoma of the anal canal using helical tomotherapy (HT) and concurrent chemotherapy (CT). Materials/Methods: From October 2007 to February 2011, 55 patients were treated with HT and concurrent CT (5-fluorouracil/capecitabine and mitomycin) for anal squamous-cell carcinoma. All patients underwent computed-tomography-based treatment planning, with pelvic and inguinal nodes receiving 36 Gy in 1.8 Gy/fraction. Following a planned 1-week break, the primary tumor site and involved nodes were boosted to a total dose of 59.4 Gy in 1.8 Gy/fraction. Dose-volume histograms of several organs at risk (OAR: bladder, small intestine, rectum, femoral heads, penile bulb, external genitalia) were assessed in terms of conformal avoidance. All toxicity was scored according to the CTCAE, v.3.0. HT plans and treatment were implemented using the TomoTherapy, Inc. software and hardware. For dosimetric comparisons, 3D RT and/or IMRT plans were also computed for some of the patients using the CMS planning system, for treatment with 6-18 MV photons and/or electrons of suitable energies from a Siemens Primus linear accelerator equipped with a multileaf collimator. Locoregional control and survival curves were compared with the log-rank test, and multivariate analysis was performed with the Cox model. Results: With 360 degrees of freedom in beam projection, HT has an advantage over other RT techniques (3D or 5-field step-and-shoot IMRT). There is significant improvement over 3D or 5-field IMRT plans in terms of dose conformity around the PTV, and dose gradients are steeper outside the target volume, resulting in reduced doses to OARs. Using HT, acute toxicity was acceptable and seemed better than historical standards. Conclusions: Our results suggest that HT combined with concurrent CT for anal cancer is effective and tolerable. Compared to 3D RT or 5-field step-and-shoot IMRT, there is better conformity around the PTV and better OAR sparing.
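The dose-volume histogram assessment mentioned above reduces, for each organ at risk, to the fraction of its volume receiving at least each threshold dose. A minimal sketch with invented voxel doses:

```python
def cumulative_dvh(voxel_doses_gy, thresholds_gy):
    """Cumulative DVH: fraction of a structure's volume receiving at
    least each threshold dose."""
    n = len(voxel_doses_gy)
    return [sum(1 for d in voxel_doses_gy if d >= t) / n for t in thresholds_gy]

# Hypothetical voxel doses (Gy) sampled in an organ at risk
oar_doses = [5.0, 12.5, 18.0, 22.0, 30.0, 36.0, 41.0, 48.5, 55.0, 59.4]
thresholds = [0.0, 20.0, 36.0, 59.4]

dvh = cumulative_dvh(oar_doses, thresholds)
# dvh == [1.0, 0.7, 0.5, 0.1]: e.g. 70% of the volume receives >= 20 Gy
```

The steeper dose gradients outside the target reported for HT show up as a faster fall-off of this curve for the OARs.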


The motivation for this research arose from the abrupt rise and fall of minicomputers, which were initially used both for industrial automation and business applications because they cost significantly less than their predecessors, the mainframes. Industrial automation later developed its own vertically integrated hardware and software to address the application needs of uninterrupted operations, real-time control and resilience to harsh environmental conditions. This led to the creation of an independent industry, namely industrial automation as used in PLC, DCS, SCADA and robot control systems. This industry today employs over 200,000 people in a profitable, slow-clockspeed context, in contrast to the two mainstream computing industries: information technology (IT), focused on business applications, and telecommunications, focused on communications networks and hand-held devices. Already in the 1990s it was foreseen that IT and communications would merge into one information and communications technology (ICT) industry. The fundamental question of the thesis is: could industrial automation leverage a common technology platform with the newly formed ICT industry?

Computer systems dominated by complex instruction set computers (CISC) were challenged during the 1990s by higher-performance reduced instruction set computers (RISC). RISC evolved in parallel with the constant advancement of Moore's law, and these developments created the high-performance, low-energy-consumption system-on-chip (SoC) architecture. Unlike in the CISC world, RISC processor architecture is a separate industry from RISC chip manufacturing. It also has several hardware-independent software platforms, each consisting of an integrated operating system, development environment, user interface and application market, which give customers more choice through hardware-independent, real-time-capable software applications. An architecture disruption emerged, and the smartphone and tablet markets formed with new rules and new key players in the ICT industry. Today there are more RISC computer systems running Linux (or other Unix variants) than any other computer system. The astonishing rise of SoC-based technologies and related software platforms in smartphones created, in unit terms, the largest installed base ever seen in the history of computers, and it is now being further extended by tablets. An additional underlying element of this transition is the increasing role of open-source technologies in both software and hardware. This has driven the microprocessor-based personal computer industry, with its few dominant closed operating system platforms, into a steep decline. A significant factor in this process has been the separation of processor architecture from processor chip production, and the merger of operating systems and application development platforms into integrated software platforms with proprietary application markets. Furthermore, pay-by-click marketing has changed the way application development is compensated: freeware, ad-based or licensed, all at a lower price and used by a wider customer base than ever before. Moreover, the concept of a software maintenance contract is very remote in the app world.

However, as a slow-clockspeed industry, industrial automation has remained intact during the disruptions based on SoC and related software platforms in the ICT industries. Industrial automation incumbents continue to supply vertically integrated systems consisting of proprietary software and proprietary, mainly microprocessor-based, hardware. They enjoy admirable profitability levels on a very narrow customer base thanks to strong technology-enabled customer lock-in and customers' high risk leverage, as customer production depends on fault-free operation of the industrial automation systems. When will this balance of power be disrupted? The thesis suggests how industrial automation could join the mainstream ICT industry and create an information, communication and automation technology (ICAT) industry. Lately the Internet of Things (IoT) and weightless networks, a new standard leveraging frequency channels earlier occupied by TV broadcasting, have gradually started to change the rigid world of machine-to-machine (M2M) interaction. It is foreseeable that enough momentum will build that the industrial automation market will in due course face an architecture disruption empowered by these new trends. This thesis examines the current state of industrial automation and the competition between the incumbents, first through research on cost-competitiveness efforts in captive outsourcing of engineering, research and development, and second through research on process re-engineering in the case of complex-system global software support. Third, we investigate the views of the industry actors, namely customers, incumbents and newcomers, on the future direction of industrial automation, and we conclude with our assessment of the possible routes industrial automation could take given the looming rise of the Internet of Things and weightless networks. Industrial automation is an industry dominated by a handful of global players, each focusing on maintaining its own proprietary solutions. The rise of de facto standards such as the IBM PC, Unix, Linux and SoC, leveraged by IBM, Compaq, Dell, HP, ARM, Apple, Google, Samsung and others, has created the new markets of personal computers, smartphones and tablets, and will eventually also impact industrial automation through game-changing commoditization and related control point and business model changes. This trend will inevitably continue, but the transition to a commoditized industrial automation will not happen in the near future.


The complex structural organization of the white matter of the brain can be depicted in vivo in great detail with advanced diffusion magnetic resonance (MR) imaging schemes. Diffusion MR imaging techniques are increasingly varied, from the simplest and most commonly used technique, the mapping of apparent diffusion coefficient (ADC) values, to the more complex, such as diffusion tensor imaging, q-ball imaging, diffusion spectrum imaging, and tractography. The type of structural information obtained differs according to the technique used. To fully understand how diffusion MR imaging works, it is helpful to be familiar with the physical principles of water diffusion in the brain and the conceptual basis of each imaging technique. Knowledge of the technique-specific requirements with regard to hardware and acquisition time, as well as the advantages, limitations, and potential interpretation pitfalls of each technique, is especially useful.
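As a concrete example of the simplest technique mentioned above, the ADC can be computed per voxel from signals acquired at two diffusion weightings, assuming the standard monoexponential model S(b) = S0·exp(−b·ADC). The signal values below are illustrative:

```python
import math

def apparent_diffusion_coefficient(s0, sb, b):
    """ADC from the monoexponential model S(b) = S0 * exp(-b * ADC).
    b is the diffusion weighting in s/mm^2; signals are in arbitrary units."""
    return math.log(s0 / sb) / b

# Hypothetical voxel signals at b = 0 and b = 1000 s/mm^2
s0, sb, b = 1000.0, 450.0, 1000.0
adc = apparent_diffusion_coefficient(s0, sb, b)   # in mm^2/s
# Healthy brain parenchyma typically falls near 0.7-1.0e-3 mm^2/s
```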


Generalized Born methods are currently among the solvation models most commonly used for biological applications. We reformulate the generalized Born molecular volume method initially described by Lee et al. (2003, J Phys Chem, 116, 10606; 2003, J Comp Chem, 24, 1348) using fast Fourier transform convolution integrals. Changes to the initial method are discussed and analyzed. Finally, the method is extensively checked with snapshots from common molecular modeling applications: binding free energy computations and docking. Biologically relevant test systems are chosen, comprising 855 to 36,091 atoms. It is clearly demonstrated that, precision-wise, the proposed method performs as well as the original, and could benefit more from hardware-accelerated boards.
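The core trick of such a reformulation, evaluating volume integrals as convolutions in Fourier space, can be sketched on a toy grid. This is a generic illustration of FFT convolution, not the authors' actual kernel; the grid size and spherical kernel are invented:

```python
import numpy as np

def fft_convolve(grid, kernel):
    """Circular convolution via the convolution theorem, O(N log N)."""
    return np.real(np.fft.ifftn(np.fft.fftn(grid) * np.fft.fftn(kernel)))

n = 16
grid = np.zeros((n, n, n))
grid[8, 8, 8] = 1.0                       # a single point "atom" density

# Spherical top-hat kernel (radius 2 voxels) centred on the grid origin,
# laid out with wrap-around so the FFT treats it as centred.
coords = np.fft.fftfreq(n) * n            # 0, 1, ..., 7, -8, ..., -1
x, y, z = np.meshgrid(coords, coords, coords, indexing="ij")
kernel = ((x**2 + y**2 + z**2) <= 2.0**2).astype(float)

smoothed = fft_convolve(grid, kernel)
# Convolving a delta with the kernel reproduces the kernel around the atom,
# so the smoothed field integrates to the kernel volume (33 voxels).
```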


Following the success of the first round table in 2001, the Swiss Proteomic Society has organized two additional special events during its last two meetings: a proteomic application exercise in 2002 and a round table in 2003. The main objective of such events is to bring together, around a challenging topic in mass spectrometry, two groups of specialists: those who develop and commercialize mass spectrometry equipment and software, and expert MS users for peptidomics and proteomics studies. The first round table (Geneva, 2001), entitled "Challenges in Mass Spectrometry", was supported by brief oral presentations that stressed critical questions in the field of MS development or applications (Stöcklin and Binz, Proteomics 2002, 2, 825-827). Topics included (i) direct analysis of complex biological samples; (ii) status and perspectives for MS investigations of noncovalent peptide-ligand interactions; (iii) whether it is more appropriate to have complementary instruments rather than one universal piece of equipment; (iv) standardization and improvement of MS signals for protein identification; (v) what the next generation of equipment would be; and finally (vi) how to keep MS hardware and software up to date and accessible to all. For the SPS'02 meeting (Lausanne, 2002), a full-session alternative event, the "Proteomic Application Exercise", was proposed. Two different samples were prepared and sent to the participants: 100 µg of snake venom (a complex mixture of peptides and proteins) and 10-20 µg of an almost pure recombinant polypeptide derived from the shrimp Penaeus vannamei carrying a heterogeneous post-translational modification (PTM). Among the 15 participants that received the samples blind, eight returned results, and most of them were asked to present their results at the congress, emphasizing the strategy, the manpower and the instrumentation used (Binz et al., Proteomics 2003, 3, 1562-1566). It appeared that for the snake venom extract, the quality of the results was not particularly dependent on the strategy used, as all approaches allowed identification of a certain number of protein families. The genus of the snake was identified in most cases, but the species remained ambiguous. Surprisingly, the precise identification of the almost pure recombinant polypeptide turned out to be much more complicated than expected, as only one group reported the full sequence. Finally, the SPS'03 meeting reported here included a round table on the difficult and challenging task of "Quantification by Mass Spectrometry", a discussion sustained by four selected oral presentations on the use of stable isotopes, electrospray ionization versus matrix-assisted laser desorption/ionization approaches to quantify peptides and proteins in biological fluids, the handling of differential two-dimensional liquid chromatography tandem mass spectrometry data resulting from high-throughput experiments, and the quantitative analysis of PTMs. During these three events at the SPS meetings, the impressive quality and quantity of exchanges between the developers and providers of mass spectrometry equipment and software, expert users and the audience were a key element of the success of these fruitful events and have definitively paved the way for future round tables and challenging exercises at SPS meetings.


MRI has evolved into an important diagnostic technique in medical imaging. However, reliability of the derived diagnosis can be degraded by artifacts, which challenge both radiologists and automatic computer-aided diagnosis. This work proposes a fully-automatic method for measuring image quality of three-dimensional (3D) structural MRI. Quality measures are derived by analyzing the air background of magnitude images and are capable of detecting image degradation from several sources, including bulk motion, residual magnetization from incomplete spoiling, blurring, and ghosting. The method has been validated on 749 3D T1-weighted 1.5 T and 3 T head scans acquired at 36 Alzheimer's Disease Neuroimaging Initiative (ADNI) study sites operating with various software and hardware combinations. Results are compared against qualitative grades assigned by the ADNI quality control center (taken as the reference standard). The derived quality indices are independent of the MRI system used and agree with the reference standard quality ratings with high sensitivity and specificity (>85%). The proposed procedures for quality assessment could be of great value for both research and routine clinical imaging. It could greatly improve workflow through its ability to rule out the need for a repeat scan while the patient is still in the magnet bore.
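The principle behind analyzing the air background can be illustrated simply: in a magnitude image, a pure-noise background follows a Rayleigh distribution, whose mean relates to the underlying Gaussian noise sigma by mean = σ·√(π/2). A sketch with simulated data (all numbers invented, not the paper's actual quality indices):

```python
import math
import random

random.seed(1)
sigma_true = 5.0

# Simulated air-background voxels of a magnitude image: |complex noise|
background = [
    math.hypot(random.gauss(0.0, sigma_true), random.gauss(0.0, sigma_true))
    for _ in range(20000)
]

# Rayleigh mean = sigma * sqrt(pi/2)  =>  invert to estimate sigma
sigma_est = (sum(background) / len(background)) / math.sqrt(math.pi / 2.0)

mean_tissue = 400.0                 # hypothetical mean tissue intensity
snr = mean_tissue / sigma_est       # a basic background-derived quality index
```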


Introduction: Building online courses is a highly time-consuming task for the teachers of a single university. Universities working alone create high-quality courses but often cannot cover all pathological fields, and working separately often leads to duplication of content among universities, a great waste of teacher time and energy. In 2011 we initiated a French university network for building shared online teaching pathology cases, and this network was extended in 2012 to Quebec and Switzerland. Method: Twenty French universities, University Laval in Quebec and the University of Lausanne in Switzerland are associated with this project. One e-learning Moodle platform (http://moodle.sorbonne-paris-cite.fr/) contains texts with URLs pointing toward virtual slides that are decentralized across several universities. Each university is responsible for its own slide scanning, slide storage and online display with virtual slide viewers. The Moodle website is hosted by PRES Sorbonne Paris Cité, and financial support for hardware has been obtained from UNF3S (http://www.unf3s.org/) and from PRES Sorbonne Paris Cité. Financial support for international fellowships has been obtained from CFQCU (http://www.cfqcu.org/). Results: The Moodle interface was explained to pathology teachers in web-based conferences with screen sharing. The teachers then added content such as clinical cases, self-evaluations and other media, organized in several sections by student level and pathological field. Content can be used for online learning or for online preparation of subsequent classroom courses. In autumn 2013, one resident from Quebec spent 6 weeks in France and Switzerland and created original content in inflammatory skin pathology. This content is currently being validated by senior teachers and will be opened to pathology residents in spring 2014. All content on the website can be accessed for free. Most content requires only anonymous connection, but some specific fields, especially those containing pictures obtained from patients who consented to teaching use only, require personal identification of the students. Students also have to register to access the Moodle tests. All content is written in French, but one case has been translated into English to illustrate this communication (http://moodle.sorbonne-pariscite.fr/mod/page/view.php?id=261) (use "login as a guest"). The Moodle test module allows many types of shared questions, making it easy to create personalized tests. Content that is open to students has been validated by an editorial committee composed of colleagues from the participating institutions. Conclusions: Future developments include other international fellowships, the next one being scheduled from May to October 2014 for one French resident in Quebec, with a study program centered on lung and breast pathology. It must be kept in mind that these e-learning programs depend heavily on teachers' time, not only at these early steps but also later, to keep the content up to date. We believe that funding resident fellowships for developing online pathology teaching content is a win-win situation: highly beneficial for the residents, who will improve their knowledge and way of thinking; for the teachers, who will worry less about access rights or image formats; and for the students, who will get courses fully adapted to their practice.


Controversy exists about the best method to achieve bone fusion in four-corner arthrodesis. Thirty-five patients who underwent this procedure by our technique were included in the study. Surgical indications were stage II-III SLAC wrist, stage II SNAC wrist and severe traumatic midcarpal joint injury. Mean follow-up was 4.6 years. Mean active flexion and extension were 34 degrees and 30 degrees respectively; grip strength recovery was 79%. Radiological consolidation was achieved in all cases. The mean DASH score was 23 and the postoperative pain improvement by visual analogue scale was statistically significant. Return to work was possible at 4 months for the average patient. Complications were a capitate fracture in one patient and the need for hardware removal in four cases. Four-corner bone wrist arthrodesis by dorsal rectangular plating achieves an acceptable preservation of range of motion with good pain relief, an excellent consolidation rate and minimal complications.


We propose a new approach and related indicators for globally distributed software support and development based on a 3-year process improvement project in a globally distributed engineering company. The company develops, delivers and supports a complex software system with tailored hardware components and unique end-customer installations. By applying the domain knowledge from operations management on lead time reduction and its multiple benefits to process performance, the workflows of globally distributed software development and multitier support processes were measured and monitored throughout the company. The results show that the global end-to-end process visibility and centrally managed reporting at all levels of the organization catalyzed a change process toward significantly better performance. Due to the new performance indicators based on lead times and their variation with fixed control procedures, the case company was able to report faster bug-fixing cycle times, improved response times and generally better customer satisfaction in its global operations. In all, lead times to implement new features and to respond to customer issues and requests were reduced by 50%.
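The lead-time indicators described above boil down to simple statistics over ticket timestamps. A minimal sketch with invented support tickets (not the company's data):

```python
from datetime import date

# Hypothetical support tickets: (opened, closed)
tickets = [
    (date(2013, 1, 2), date(2013, 1, 9)),
    (date(2013, 1, 5), date(2013, 1, 8)),
    (date(2013, 2, 1), date(2013, 2, 15)),
    (date(2013, 2, 3), date(2013, 2, 7)),
]

lead_times = [(closed - opened).days for opened, closed in tickets]

mean_lead = sum(lead_times) / len(lead_times)
# Variation of lead times, the second indicator monitored with fixed
# control procedures (population variance here, for simplicity)
variance = sum((t - mean_lead) ** 2 for t in lead_times) / len(lead_times)
```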


Massive synaptic pruning following over-growth is a general feature of mammalian brain maturation. Pruning starts near the time of birth and is completed by the time of sexual maturation. Trigger signals able to induce synaptic pruning could be related to dynamic functions that depend on the timing of action potentials. Spike-timing-dependent synaptic plasticity (STDP) is a change in synaptic strength based on the ordering of pre- and postsynaptic spikes. The relation between synaptic efficacy and synaptic pruning suggests that weak synapses may be modified and removed through competitive "learning" rules. Such a plasticity rule might strengthen the connections among neurons that belong to cell assemblies characterized by recurrent patterns of firing. Conversely, connections that are not recurrently activated might decrease in efficacy and eventually be eliminated. The main goal of our study is to determine whether, and under which conditions, such cell assemblies may emerge out of a locally connected random network of integrate-and-fire units distributed on a 2D lattice, receiving background noise and content-related input organized in both temporal and spatial dimensions. The originality of our study rests on the relatively large size of the network (10,000 units), the duration of the experiment (10^6 time units, one time unit corresponding to the duration of a spike), and the application of an original bio-inspired STDP modification rule compatible with hardware implementation. A first batch of experiments was performed to verify that the randomly generated connectivity and the STDP-driven pruning did not show any spurious bias in the absence of stimulation. Among other things, a scale factor was approximated to compensate for the effect of network size on activity. Networks were then stimulated with the spatiotemporal patterns. The analysis of the connections remaining at the end of the simulations, as well as of the time series of activity of the interconnected units, suggests that feed-forward circuits emerge from the initially randomly connected networks by pruning.
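The pair-based form of STDP that such pruning rules build on can be sketched as follows. The amplitudes, time constant and pruning threshold are illustrative, not the thesis's hardware-compatible rule:

```python
import math

A_PLUS, A_MINUS = 0.05, 0.06      # potentiation / depression amplitudes
TAU_MS = 20.0                     # plasticity time constant (ms)
PRUNE_THRESHOLD = 0.05            # synapses below this weight are removed

def stdp_dw(t_pre_ms, t_post_ms):
    """Weight change for one pre/post spike pair (classic exponential STDP)."""
    dt = t_post_ms - t_pre_ms
    if dt > 0:                    # pre fires before post: potentiate
        return A_PLUS * math.exp(-dt / TAU_MS)
    if dt < 0:                    # post fires before pre: depress
        return -A_MINUS * math.exp(dt / TAU_MS)
    return 0.0

w = 0.06                          # a weak synapse near the pruning threshold
w += stdp_dw(t_pre_ms=10.0, t_post_ms=35.0)   # causal pair strengthens it
pruned = w < PRUNE_THRESHOLD      # survives; repeated acausal pairs would not
```

Under such a rule, synapses taking part in recurrently activated (causal) firing patterns accumulate weight, while the rest decay toward the threshold and are eliminated, which is the competitive mechanism the study exploits.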


Imaging in neuroscience, clinical research and pharmaceutical trials often employs the 3D magnetisation-prepared rapid gradient-echo (MPRAGE) sequence to obtain structural T1-weighted images of the human brain with high spatial resolution. Typical research and clinical routine MPRAGE protocols with ~1 mm isotropic resolution require data acquisition times in the range of 5-10 min and often use only a moderate two-fold acceleration factor for parallel imaging. Recent advances in MRI hardware and acquisition methodology promise improved leverage of the MR signal and more benign artefact properties, in particular when employing increased acceleration factors in clinical routine and research. In this study, we examined four variants of a four-fold-accelerated MPRAGE protocol (2D-GRAPPA, CAIPIRINHA, CAIPIRINHA elliptical, and segmented MPRAGE) and compared clinical readings, basic image quality metrics (SNR, CNR), and automated brain tissue segmentation for morphological assessments of brain structures. The results were benchmarked against a widely used two-fold-accelerated 3T ADNI MPRAGE protocol that served as the reference in this study. Twenty-two healthy subjects (age 20-44 years) were imaged with all MPRAGE variants in a single session. An experienced reader rated all images as being of clinically useful image quality. CAIPIRINHA MPRAGE scans were perceived on average to be of identical value for reading as the reference ADNI-2 protocol. SNR and CNR measurements exhibited the theoretically expected performance at four-fold acceleration. The results of this study demonstrate that the four-fold-accelerated protocols introduce systematic biases in the segmentation results of some brain structures compared to the reference ADNI-2 protocol. Furthermore, the results suggest that the increased noise levels in the accelerated protocols play an important role in introducing these biases, at least under the present study conditions.
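The basic metrics compared across protocols, and the theoretically expected SNR penalty from parallel-imaging acceleration, can be sketched as follows. The tissue intensities and the unit geometry factor are illustrative assumptions:

```python
import math

def snr(mean_tissue, sd_noise):
    """Signal-to-noise ratio of one tissue class."""
    return mean_tissue / sd_noise

def cnr(mean_a, mean_b, sd_noise):
    """Contrast-to-noise ratio between two tissue classes."""
    return abs(mean_a - mean_b) / sd_noise

def accelerated_snr(snr_full, r, g=1.0):
    """Theoretical parallel-imaging SNR: snr_full / (g * sqrt(R))
    for acceleration factor R and coil geometry factor g."""
    return snr_full / (g * math.sqrt(r))

gm, wm, sd = 300.0, 450.0, 10.0   # hypothetical intensities / noise SD
snr_wm = snr(wm, sd)              # 45.0
cnr_gm_wm = cnr(gm, wm, sd)       # 15.0
snr_r4 = accelerated_snr(snr_wm, r=4)   # 22.5: half the unaccelerated SNR
```

This factor-of-two SNR drop at R = 4 (relative to an unaccelerated scan, ignoring g > 1) is the noise increase the study points to as a likely source of the segmentation biases.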