Abstract:
Brain asymmetry, or the structural and functional specialization of each brain hemisphere, has fascinated neuroscientists for over a century. Even so, the genetic and environmental factors that influence brain asymmetry are largely unknown. Diffusion tensor imaging (DTI) now allows asymmetry to be studied at a microscopic scale by examining differences in fiber characteristics across hemispheres rather than differences in structure shapes and volumes. Here we analyzed 4 Tesla DTI scans from 374 healthy adults, including 60 monozygotic twin pairs, 45 same-sex dizygotic pairs, and 164 mixed-sex DZ twins and their siblings (mean age: 24.4 ± 1.9 years). All DTI scans were nonlinearly aligned to a geometrically-symmetric, population-based image template. We computed voxel-wise maps of significant asymmetries (left/right differences) for common diffusion measures that reflect fiber integrity (fractional and geodesic anisotropy, FA and GA, and mean diffusivity, MD). In quantitative genetic models computed from all same-sex twin pairs (N=210 subjects), genetic factors accounted for 33% of the variance in asymmetry for the inferior fronto-occipital fasciculus, 37% for the anterior thalamic radiation, and 20% for the forceps major and uncinate fasciculus (all L > R). Shared environmental factors accounted for around 15% of the variance in asymmetry for the cortico-spinal tract (R > L) and about 10% for the forceps minor (L > R). Sex differences in asymmetry (men > women) were significant, and were greatest in regions with prominent FA asymmetries. These maps identify heritable DTI-derived features, and may empower genome-wide searches for genetic polymorphisms that influence brain asymmetry.
Abstract:
Studies of cerebral asymmetry can open doors to understanding the functional specialization of each brain hemisphere, and how this is altered in disease. Here we examined hemispheric asymmetries in fiber architecture using diffusion tensor imaging (DTI) in 100 subjects, using high-dimensional fluid warping to disentangle shape differences from measures sensitive to myelination. Confounding effects of purely structural asymmetries were reduced by using co-registered structural images to fluidly warp 3D maps of fiber characteristics (fractional and geodesic anisotropy) to a structurally symmetric minimal deformation template (MDT). We performed a quantitative genetic analysis on 100 subjects to determine whether the sources of the remaining signal asymmetries were primarily genetic or environmental. A twin design was used to identify the heritable features of fiber asymmetry in various regions of interest, to further assist in the discovery of genes influencing brain micro-architecture and brain lateralization. Genetic influences and left/right asymmetries were detected in the fiber architecture of the frontal lobes, with minor differences depending on the choice of registration template.
Abstract:
Brain asymmetry has been a topic of interest for neuroscientists for many years. The advent of diffusion tensor imaging (DTI) allows researchers to extend the study of asymmetry to a microscopic scale by examining fiber integrity differences across hemispheres rather than the macroscopic differences in shape or structure volumes. Even so, the power to detect these microarchitectural differences depends on the sample size and on how the brain images are registered. We fluidly registered 4 Tesla DTI scans from 180 healthy adult twins (45 identical and 45 fraternal pairs) to a geometrically-centered population mean template. We computed voxelwise maps of significant asymmetries (left/right hemisphere differences) for common fiber anisotropy indices (FA, GA). Quantitative genetic models revealed that 47-62% of the variance in asymmetry was due to genetic differences in the population. We studied how these heritability estimates varied with the type of registration target (T1- or T2-weighted) and with sample size. All methods consistently found that genetic factors strongly determined the lateralization of fiber anisotropy, facilitating the quest for specific genes that might influence brain asymmetry and fiber integrity.
Abstract:
We used diffusion tensor magnetic resonance imaging (DTI) to reveal the extent of genetic effects on brain fiber microstructure, based on tensor-derived measures, in 22 pairs of monozygotic (MZ) twins and 23 pairs of dizygotic (DZ) twins (90 scans). After Log-Euclidean denoising to remove rank-deficient tensors, DTI volumes were fluidly registered by high-dimensional mapping of co-registered MP-RAGE scans to a geometrically-centered mean neuroanatomical template. After tensor reorientation using the strain of the 3D fluid transformation, we computed two widely used scalar measures of fiber integrity: fractional anisotropy (FA), and geodesic anisotropy (GA), which measures the geodesic distance between tensors in the symmetric positive-definite tensor manifold. Spatial maps of intraclass correlations (r) within MZ and DZ twin pairs were compared to compute maps of Falconer's heritability statistics, i.e. the proportion of population variance explainable by genetic differences among individuals. Cumulative distribution function (CDF) plots of effect sizes showed that the manifold measure, GA, performed comparably to the Euclidean measure, FA, in detecting genetic correlations. While the maps were relatively noisy, the CDFs showed promise for detecting genetic influences on brain fiber integrity as the current sample expands.
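Falconer's heritability statistic mentioned above has a simple closed form based on the MZ and DZ intraclass correlations. A minimal sketch, using illustrative correlation values rather than any figures from the study:

```python
# Falconer's variance-component estimates from twin intraclass correlations:
#   h2 = 2 * (r_MZ - r_DZ)   additive genetic variance (A)
#   c2 = 2 * r_DZ - r_MZ     common (shared) environment (C)
#   e2 = 1 - r_MZ            unique environment + measurement error (E)

def falconer(r_mz: float, r_dz: float) -> dict:
    """Falconer's estimates of A, C and E variance proportions."""
    h2 = 2.0 * (r_mz - r_dz)
    c2 = 2.0 * r_dz - r_mz
    e2 = 1.0 - r_mz
    return {"h2": h2, "c2": c2, "e2": e2}

# Illustrative voxel-wise correlations (hypothetical, not study data):
est = falconer(r_mz=0.60, r_dz=0.35)
# est == {"h2": 0.50, "c2": 0.10, "e2": 0.40} (up to floating-point error)
```

The three components sum to 1 by construction, which makes the statistic easy to map voxel-wise.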
Abstract:
Information from the full diffusion tensor (DT) was used to compute voxel-wise genetic contributions to brain fiber microstructure. First, we designed a new multivariate intraclass correlation formula in the log-Euclidean framework. We then used the full multivariate structure of the tensor in a multivariate version of a voxel-wise maximum-likelihood structural equation model (SEM) that computes the variance contributions in the DTs from genetic (A), common environmental (C) and unique environmental (E) factors. Our algorithm was tested on DT images from 25 identical and 25 fraternal twin pairs. After linear and fluid registration to a mean template, we computed the intraclass correlation and Falconer's heritability statistic for several scalar DT-derived measures and for the full multivariate tensors. Covariance matrices were computed from the DTs and used as input to the SEM. Analyzing the full DT enhanced the detection of A and C effects. This approach should empower imaging genetics studies that use DTI.
Abstract:
We present a new algorithm to compute the voxel-wise genetic contribution to brain fiber microstructure using diffusion tensor imaging (DTI) in a dataset of 25 monozygotic (MZ) twins and 25 dizygotic (DZ) twin pairs (100 subjects total). First, the structural and DT scans were linearly co-registered. Structural MR scans were nonlinearly mapped via a 3D fluid transformation to a geometrically centered mean template, and the deformation fields were applied to the DTI volumes. After tensor re-orientation to realign them to the anatomy, we computed several scalar and multivariate DT-derived measures including the geodesic anisotropy (GA), the tensor eigenvalues and the full diffusion tensors. A covariance-weighted distance was measured between twins in the Log-Euclidean framework [2], and used as input to a maximum-likelihood based algorithm to compute the contributions from genetics (A), common environmental factors (C) and unique environmental ones (E) to fiber architecture. Quantitative genetic studies can take advantage of the full information in the diffusion tensor, using covariance-weighted distances and statistics on the tensor manifold.
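The Log-Euclidean framework cited as [2] compares symmetric positive-definite tensors through their matrix logarithms. A minimal sketch of the basic tensor distance, using toy diagonal tensors (not study data) and a NumPy eigendecomposition for the matrix log:

```python
import numpy as np

# Log-Euclidean distance between symmetric positive-definite (SPD)
# diffusion tensors:  d(S1, S2) = || log(S1) - log(S2) ||_F

def spd_log(S: np.ndarray) -> np.ndarray:
    """Matrix logarithm of an SPD tensor via eigendecomposition."""
    w, V = np.linalg.eigh(S)
    return V @ np.diag(np.log(w)) @ V.T

def log_euclidean_distance(S1: np.ndarray, S2: np.ndarray) -> float:
    return float(np.linalg.norm(spd_log(S1) - spd_log(S2), ord="fro"))

# Toy tensors (eigenvalues in units of 1e-3 mm^2/s):
S1 = np.diag([1.7, 0.3, 0.3])  # anisotropic, fiber-like
S2 = np.diag([1.0, 1.0, 1.0])  # isotropic
d = log_euclidean_distance(S1, S2)
```

Unlike a plain Euclidean difference of tensor entries, this metric stays within the SPD manifold, which is why it is well suited to averaging and covariance computations on diffusion tensors.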
Abstract:
Schooling is one of the core experiences of most young people in the Western world. This study examines the ways that students inhabit subjectivities defined in their relationship to some normalised good student. The idea that schools exist to produce students who become good citizens is one of the basic tenets of modernist educational philosophies that dominate the contemporary education world. The school has become a political site where policy, curriculum orientations, expectations and philosophies of education contest for the ‘right’ way to school and be schooled. For many people, schools and schooling only make sense if they resonate with past experiences. The good student is framed within these aspects of cultural understanding. However, this commonsense attitude is based on a hegemonic understanding of the good, rather than the good student as a contingent multiplicity that is produced by an infinite set of discourses and experiences. In this book, author Greg Thompson argues that this understanding of subjectivities and power is crucial if schools are to meet the needs of a rapidly changing and challenging world. As a high school teacher for many years, Thompson often wondered how students responded to complex articulations on how to be a good student. How a student can be considered good is itself an articulation of powerful discourses that compete within the school. Rather than assuming a moral or ethical citizen, this study turns that logic on its head to ask students in what ways they can be good within the school. Visions of the good student deployed in various ways in schools act to produce various ways of knowing the self as certain types of subjects. Developing the postmodern theories of Foucault and Deleuze, this study argues that schools act to teach students to know themselves in certain idealised ways through which they are located, and locate themselves, in hierarchical rationales of the good student.
Problematising the good student in high schools engages those institutional discourses with the philosophy, history and sociology of education. Asking students how they negotiate or perform their selves within schools challenges the narrow and limiting ways that the good is often understood. By pushing the ontological understandings of the self beyond the modernist philosophies that currently dominate schools and schooling, this study problematises the tendency to see students as fixed, measurable identities (beings) rather than dynamic, evolving performances (becomings). Who is the Good High School Student? is an important book for scholars conducting research on high school education, as well as student-teachers, teacher educators and practicing teachers alike.
Abstract:
Highly conductive, transparent and flexible planar electrodes were fabricated using interwoven silver nanowires and single-walled carbon nanotubes (AgNW:SWCNT) in a PEDOT:PSS matrix via an epoxy transfer method from a silicon template. The planar electrodes achieved a sheet resistance of 6.6 ± 0.0 Ω/sq and an average transmission of 86% between 400 and 800 nm. A high figure of merit of 367 Ω⁻¹ is reported for the electrodes, which is much higher than that measured for indium tin oxide and reported for other AgNW composites. The AgNW:SWCNT:PEDOT:PSS electrode was used to fabricate low-temperature (annealing-free) devices, demonstrating their potential to function with a range of organic semiconducting polymer:fullerene bulk heterojunction blend systems.
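The abstract does not state which figure-of-merit formula was used. Assuming the widely used dc-to-optical conductivity ratio σ_dc/σ_opt, computed from sheet resistance and transmittance, the quoted numbers can be checked with a short sketch:

```python
# Figure of merit for a transparent electrode, sigma_dc / sigma_opt,
# from sheet resistance R_s (ohm/sq) and transmittance T:
#   FoM = Z0 / (2 * R_s * (T**-0.5 - 1)),  Z0 = impedance of free space.
# Values taken from the abstract: R_s = 6.6 ohm/sq, T = 0.86.

Z0 = 376.73  # ohm

def figure_of_merit(r_sheet: float, transmittance: float) -> float:
    return Z0 / (2.0 * r_sheet * (transmittance ** -0.5 - 1.0))

fom = figure_of_merit(6.6, 0.86)
# ~364, consistent with the reported 367 (small differences in the exact
# T or R_s values used would account for the gap)
```

Note that σ_dc/σ_opt is dimensionless; the Ω⁻¹ unit in the abstract suggests the authors may have used a related formulation, so this check is indicative only.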
Abstract:
As critical infrastructure such as transportation hubs continue to grow in complexity, greater importance is placed on monitoring these facilities to ensure their secure and efficient operation. In order to achieve these goals, technology continues to evolve in response to the needs of various infrastructures. To date, however, the focus of technology for surveillance has been primarily concerned with security, and little attention has been placed on assisting operations and monitoring performance in real-time. Consequently, solutions have emerged to provide real-time measurements of queues and crowding in spaces, but these have been installed as system add-ons (rather than making better use of existing infrastructure), resulting in expensive infrastructure outlay for the owner/operator, and an overload of surveillance systems which in itself creates further complexity. Given that many critical infrastructure facilities already have camera networks installed, it is much more desirable to better utilise these networks to address operational monitoring as well as security needs. Recently, a growing number of approaches have been proposed to monitor operational aspects such as pedestrian throughput, crowd size and dwell times. In this paper, we explore how these techniques relate to and complement the more commonly seen security analytics, and demonstrate the value that operational analytics can add by evaluating their performance on airport surveillance data. We explore how multiple analytics and systems can be combined to better leverage the large amount of data that is available, and we discuss the applicability and resulting benefits of the proposed framework for the ongoing operation of airports and airport networks.
Abstract:
The secretive 2011 Anti-Counterfeiting Trade Agreement – known in short by the catchy acronym ACTA – is a controversial trade pact designed to provide for stronger enforcement of intellectual property rights. The preamble to the treaty reads like pulp fiction – it raises moral panics about piracy, counterfeiting, organised crime, and border security. The agreement contains provisions on civil remedies and criminal offences; copyright law and trademark law; the regulation of the digital environment; and border measures. Memorably, Susan Sell called the international treaty a TRIPS Double-Plus Agreement, because its obligations far exceed those of the World Trade Organization's TRIPS Agreement 1994, and TRIPS-Plus Agreements, such as the Australia-United States Free Trade Agreement 2004. ACTA lacks the language of other international intellectual property agreements, which emphasise the need to balance the protection of intellectual property owners with the wider public interest in access to medicines, human development, and transfer of knowledge and technology. In Australia, there was much controversy both about the form and the substance of ACTA. While the Department of Foreign Affairs and Trade was a partisan supporter of the agreement, a wide range of stakeholders were openly critical. After holding hearings and taking note of the position of the European Parliament and the controversy in the United States, the Joint Standing Committee on Treaties in the Australian Parliament recommended the deferral of ratification of ACTA. This was striking as representatives of all the main parties agreed on the recommendation. The committee was concerned about the lack of transparency, due process, public participation, and substantive analysis of the treaty. 
There were also reservations about the ambiguity of the treaty text, and its potential implications for the digital economy, innovation and competition, plain packaging of tobacco products, and access to essential medicines. The treaty has provoked much soul-searching as to whether the Trick or Treaty reforms on the international treaty-making process in Australia have been compromised or undermined. Although ACTA stalled in the Australian Parliament, the debate over it is yet to conclude. There have been concerns in Australia and elsewhere that ACTA will be revived as a ‘zombie agreement’. Indeed, in March 2013, the Canadian government introduced a bill to ensure compliance with ACTA. Will it be also resurrected in Australia? Has it already been revived? There are three possibilities. First, the Australian government passed enhanced remedies with respect to piracy, counterfeiting and border measures in a separate piece of legislation – the Intellectual Property Laws Amendment (Raising the Bar) Act 2012 (Cth). Second, the Department of Foreign Affairs and Trade remains supportive of ACTA. It is possible, after further analysis, that the next Australian Parliament – to be elected in September 2013 – will ratify the treaty. Third, Australia is involved in the Trans-Pacific Partnership negotiations. The government has argued that ACTA should be a template for the Intellectual Property Chapter in the Trans-Pacific Partnership. The United States Trade Representative would prefer a regime even stronger than ACTA. This chapter provides a portrait of the Australian debate over ACTA. It is the account of an interested participant in the policy proceedings. This chapter will first consider the deliberations and recommendations of the Joint Standing Committee on Treaties on ACTA. Second, there was a concern that ACTA had failed to provide appropriate safeguards with respect to civil liberties, human rights, consumer protection and privacy laws. 
Third, there was a concern about the lack of balance in the treaty’s copyright measures; the definition of piracy is overbroad; the suite of civil remedies, criminal offences and border measures is excessive; and there is a lack of suitable protection for copyright exceptions, limitations and remedies. Fourth, there was a worry that the provisions on trademark law, intermediary liability and counterfeiting could have an adverse impact upon consumer interests, competition policy and innovation in the digital economy. Fifth, there was significant debate about the impact of ACTA on pharmaceutical drugs, access to essential medicines and health-care. Sixth, there was concern over the lobbying by tobacco industries for ACTA – particularly given Australia’s leadership on tobacco control and the plain packaging of tobacco products. Seventh, there were concerns about the operation of border measures in ACTA. Eighth, the Joint Standing Committee on Treaties was concerned about the jurisdiction of the ACTA Committee, and the treaty’s protean nature. Finally, the chapter raises fundamental issues about the relationship between the executive and the Australian Parliament with respect to treaty-making. There is a need to reconsider the efficacy of the Trick or Treaty reforms passed by the Australian Parliament in the 1990s.
Abstract:
PURPOSE To estimate refractive indices used by the Lenstar biometer to translate measured optical path lengths into geometrical path lengths within the eye. METHODS Axial lengths of model eyes were determined using the IOLMaster and Lenstar biometers; comparing those lengths gave an overall eye refractive index estimate for the Lenstar. Using the Lenstar Graphical User Interface, we noticed that boundaries between media could be manipulated and opposite changes in optical path lengths on either side of the boundary could be introduced. Those ratios were combined with the overall eye refractive index to estimate separate refractive indices. Furthermore, Haag-Streit provided us with a template to obtain 'air thicknesses' to compare with geometrical distances. RESULTS The axial length estimates obtained using the IOLMaster and the Lenstar agreed to within 0.01 mm. Estimates of group refractive indices used in the Lenstar were 1.340, 1.341, 1.415, and 1.354 for cornea, aqueous, lens, and overall eye, respectively. Those refractive indices did not match those of schematic eyes, but were close in the cases of aqueous and lens. Linear equations relating air thicknesses to geometrical thicknesses were consistent with our findings. CONCLUSION The Lenstar uses different refractive indices for different ocular media. Some of the refractive indices, such as that for the cornea, are not physiological; therefore, it is likely that the calibrations in the instrument correspond to instrument-specific corrections and are not the real optical path lengths.
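The conversion at the heart of this study divides a measured optical path length by the medium's group refractive index to obtain a geometrical distance. A small sketch using the indices estimated above; the optical path value itself is illustrative:

```python
# Geometrical thickness from an optical path length (OPL):
#   geometrical = OPL / n_group
# Group indices are those estimated for the Lenstar in the study.

indices = {"cornea": 1.340, "aqueous": 1.341, "lens": 1.415, "eye": 1.354}

def geometrical_thickness(opl_mm: float, medium: str) -> float:
    """Convert an optical path length (mm) to a geometrical one."""
    return opl_mm / indices[medium]

# Whole-eye example: a 32.50 mm optical axial length (hypothetical value)
axial_mm = geometrical_thickness(32.50, "eye")  # ~24.00 mm geometrical
```

This illustrates why the choice of refractive index matters: a difference of 0.01 in n changes a ~24 mm axial length by roughly 0.18 mm, which is clinically significant for intraocular lens power calculation.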
Abstract:
Melt electrospinning and its additive manufacturing analogue, melt electrospinning writing (MEW), are two processes which can produce porous materials for applications where solvent toxicity and accumulation in solution electrospinning are problematic. This study explores the melt electrospinning of poly(ε-caprolactone) (PCL) scaffolds, specifically for applications in tissue engineering. The research described here aims to inform researchers interested in melt electrospinning about technical aspects of the process. This includes rapid fiber characterization using glass microscope slides, allowing influential processing parameters on fiber morphology to be assessed, as well as observed fiber collection phenomena on different collector substrates. The distribution and alignment of melt electrospun PCL fibers can be controlled to a certain degree using patterned collectors to create large numbers of scaffolds with shaped macroporous architectures. However, the buildup of residual charge in the collected fibers limits the achievable thickness of such scaffolds. One challenge identified for MEW is controlling this charge buildup so that fibers can be placed accurately in close proximity and built up to heights of many centimeters. The scale and size of scaffolds produced using MEW, however, indicate that this emerging process will fill a technological niche in biofabrication.
Abstract:
Control over nucleation and growth of multi-walled carbon nanotubes in the nanochannels of porous alumina membranes by several combinations of posttreatments, namely exposing the membrane top surface to atmospheric plasma jet and application of standard S1813 photoresist as an additional carbon precursor, is demonstrated. The nanotubes grown after plasma treatment nucleated inside the channels and did not form fibrous mats on the surface. Thus, the nanotube growth mode can be controlled by surface treatment and application of additional precursor, and complex nanotube-based structures can be produced for various applications. A plausible mechanism of nanotube nucleation and growth in the channels is proposed, based on the estimated depth of ion flux penetration into the channels.
Abstract:
A new method for fabricating hydrogels with intricate control over hierarchical 3D porosity using micro-fiber porogens is presented. Melt electrospinning writing of poly(ε-caprolactone) is used to create the sacrificial template leading to hierarchical structuring consisting of pores inside the denser poly(2-oxazoline) hydrogel mesh. This versatile approach provides new opportunities to create well-defined multilevel control over interconnected pores with diameters in the lower micrometer range inside hydrogels with potential applications as cell scaffolds with tunable diffusion and transport of, e.g. nutrients, growth factors or therapeutics.
Abstract:
Estimating the economic burden of injuries is important for setting priorities, allocating scarce health resources and planning cost-effective prevention activities. As a metric of burden, costs account for multiple injury consequences—death, severity, disability, body region, nature of injury—in a single unit of measurement. In a 1989 landmark report to the US Congress, Rice et al [1] estimated the lifetime costs of injuries in the USA in 1985. By 2000, the epidemiology and burden of injuries had changed enough that the US Congress mandated an update, resulting in a book on the incidence and economic burden of injury in the USA [2]. To make these findings more accessible to the larger realm of scientists and practitioners and to provide a template for conducting the same economic burden analyses in other countries and settings, a summary [3] was published in Injury Prevention. Corso et al reported that, between 1985 and 2000, injury rates declined roughly 15%. The estimated lifetime cost of these injuries declined 20%, totalling US$406 billion, including US$80 billion in medical costs and US$326 billion in lost productivity. While incidence reflects problem size, the relative burden of injury is better expressed using costs.
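The quoted cost figures are internally consistent, and the 20% decline implies a 1985 baseline. A back-of-envelope check (the 1985 figure below is inferred, not stated in the source):

```python
# Cost figures quoted above, in US$ billions (year-2000 estimates).
medical = 80.0
productivity = 326.0
total = medical + productivity       # 406, matching the quoted total

# A 20% decline from the 1985 level implies (illustratively):
implied_1985 = total / (1.0 - 0.20)  # ~507.5 billion in 1985
```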