Abstract:
Strike-slip faults commonly display structurally complex areas of positive or negative topography. Understanding the development of such areas has important implications for earthquake studies and hydrocarbon exploration. Previous workers identified the key factors controlling the occurrence of both topographic modes and the related structural styles. Kinematic and stress boundary conditions are of first-order relevance. Surface mass transport and material properties affect fault network structure. Experiments demonstrate that dilatancy can generate positive topography even under simple-shear boundary conditions. Here, we use physical models with sand to show that the degree of compaction of the deformed rocks alone can determine the type of topography and related surface fault network structure in simple-shear settings. In our experiments, volume changes of ∼5% are sufficient to generate localized uplift or subsidence. We discuss scalability of model volume changes and fault network structure and show that our model fault zones satisfy geometrical similarity with natural flower structures. Our results imply that compaction may be an important factor in the development of topography and fault network structure along strike-slip faults in sedimentary basins.
Abstract:
Post-disaster reconstruction projects are often considered ineffectual or unproductive because on many occasions in the past they have performed extremely poorly during post-contract occupation, or have failed altogether to deliver acceptable outcomes. In some cases, these projects have failed even before their completion, leading many sponsor aid organisations to hold them up as examples of how not to deliver housing reconstruction. Research into previous unsuccessful projects has revealed that a lack of adequate knowledge of the context and complexity involved in implementing these projects is generally responsible for their failure. Post-disaster reconstruction projects are certainly very complex in nature, often very context-specific, and they can vary widely in magnitude. Despite such complexity, reconstruction projects can still have a high likelihood of success if adequate consideration is given to the factors known to positively influence reconstruction efforts. Good outcomes can be achieved when planners and practitioners ensure best practices are embedded in the design of reconstruction projects at the time they are first instigated. This paper outlines and discusses factors that significantly contribute to the successful delivery of post-disaster housing reconstruction projects.
Abstract:
Phylogenetic inference from sequences can be misled by both sampling (stochastic) error and systematic error (nonhistorical signals where reality differs from our simplified models). A recent study of eight yeast species using 106 concatenated genes from complete genomes showed that even small internal edges of a tree received 100% bootstrap support. This effective negation of stochastic error from large data sets is important, but longer sequences exacerbate the potential for biases (systematic error) to be positively misleading. Indeed, when we analyzed the same data set using minimum evolution optimality criteria, an alternative tree received 100% bootstrap support. We identified a compositional bias as responsible for this inconsistency and showed that it is reduced effectively by coding the nucleotides as purines and pyrimidines (RY-coding), reinforcing the original tree. Thus, a comprehensive exploration of potential systematic biases is still required, even though genome-scale data sets greatly reduce sampling error.
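RY-coding, as used above, simply recodes each nucleotide as a purine (R: A or G) or pyrimidine (Y: C or T/U), discarding much of the compositional signal that can mislead tree reconstruction. A minimal Python sketch of this recoding is given below; the function name and the treatment of gaps and ambiguity codes are illustrative choices, not taken from the original study.

    # Recode a nucleotide sequence as purines (R) and pyrimidines (Y).
    def ry_code(seq):
        purines = {"A", "G"}
        pyrimidines = {"C", "T", "U"}
        recoded = []
        for base in seq.upper():
            if base in purines:
                recoded.append("R")
            elif base in pyrimidines:
                recoded.append("Y")
            else:
                recoded.append("-")  # gaps and ambiguity codes left unresolved
        return "".join(recoded)

    print(ry_code("ACGTACGT"))  # -> RYRYRYRY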
Abstract:
This is a fine collection of papers from some leading educational scholars. They argue that the contemporary corporatised policies of education, such as international education, limit the possibilities of transformative practice. They demonstrate how the local (the national) and the global (the imperial) are interconnected phenomena, acting upon one another to construct indigeneity, racialised identities and even hybridisation, in ways that engender inequalities, restrict human rights, and infringe on the democratic and civil rights of the colonised and the marginalised. At the same time, they point to the possibilities of resistance, conditions that provide pedagogic opportunities for the creation of counter-hegemonic ideas, expressions, practices and structures. This book is highly recommended. Fazal Rizvi, Professor in Educational Policy Studies, University of Illinois, Urbana-Champaign, USA
Abstract:
This thesis examines the social practice of homework. It explores how homework is shaped by the discourses, policies and guidelines in circulation in a society at any given time, with particular reference to one school district in the province of Newfoundland and Labrador, Canada. This study investigates how contemporary homework reconstitutes the home as a pedagogical site where the power of the institution of schooling circulates regularly from school to home. It examines how the educational system shapes the organization of family life and how family experiences with homework may be different in different sites depending on the accessibility of various forms of cultural capital. This study employs a qualitative approach, incorporating multiple case studies, and is complemented by insights from institutional ethnography and critical discourse analysis. It draws on Foucault’s theoretical concepts, including power and power relations, governmentality and surveillance, as well as Bourdieu’s concepts of economic, social and cultural capital for analysis. It employs concepts from Bourdieu’s work as they have been expanded on by researchers including Reay (1998), Lareau (2000), and Griffith and Smith (2005). The studies of these researchers allowed for an examination of homework as it related to families and mothers’ work. Smith’s (1987; 1999) concepts of ruling relations, mothers’ unpaid labour, and the engine of inequality were also employed in the analysis. Family interviews with ten volunteer families, teacher focus group sessions with 15 teachers from six schools, homework artefacts, school newsletters, homework brochures, and publicly available assessment and evaluation policy documents from one school district were analyzed. From this analysis, key themes emerged and the findings are documented throughout five data analysis chapters. This study shows a change in education in response to a system shaped by standards, accountability and testing. It documents an increased transference of educational responsibility from one educational stakeholder to another. This transference of responsibility shifts downward until it eventually reaches the family in the form of homework and educational activities. Texts in the form of brochures and newsletters, sent home from school, make available to parents specific subject positions that act as instruments of normalization. These subject positions promote a particular ‘ideal’ family that has access to certain types of cultural capital needed to meet the school’s expectations. However, the study shows that these resources are not equally available to all, and some families struggle to obtain what is necessary to complete educational activities in the home. The increase in transference of educational work from the school to the home results in greater work for parents, particularly mothers. Consideration is also given to mothers’ role in homework and how, in turn, classroom instructional practices are sometimes dependent on the work completed at home, with differential effects for children. This study confirms previous findings that it is mothers who assume the greatest role in the educational trajectory of their children. An important finding in this research is that it is not only middle-class mothers who dedicate extensive time and hard work to ensuring their children’s educational success; working-class mothers also make substantial contributions of time and resources to their children’s education.
The assignments and educational activities distributed as homework require parents’ knowledge of technical school pedagogy to help their children. Much of the homework being sent home from schools is in the area of literacy, particularly reading, but requires parents to do more than read with children. A key finding is that the practices of parents are changing and being reconfigured by the expectations of schools in regard to reading. Parents are now required to monitor and supervise children’s reading, as well as help children complete reading logs, written reading responses, and follow-up questions. The reality of family life as discussed by the participants in this study does not match the ‘ideal’ as portrayed in the educational documents. Homework sessions often create frustrations and tensions between parents and children. Some of the greatest struggles for families were created by mathematics homework, homework for those enrolled in the French Immersion program, and the work required to complete Literature, Heritage and Science Fair projects. Even when institutionalized and objectified capital was readily available, many families still encountered struggles when trying to carry out the assigned educational tasks. This thesis argues that homework and education-related activities play out differently in different homes. Consideration of this significance may assist educators to better understand and appreciate the vast differences among families and the ways in which each family can contribute to their children’s educational trajectory.
Abstract:
The health impacts of exposure to ambient temperature have been drawing increasing attention from the environmental health research community, government, society, industries, and the public. Case-crossover and time series models are most commonly used to examine the effects of ambient temperature on mortality. However, some key methodological issues remain to be addressed. For example, few studies have used spatiotemporal models to assess the effects of spatial temperatures on mortality. Few studies have used a case-crossover design to examine the delayed (distributed lag) and non-linear relationship between temperature and mortality. Also, little evidence is available on the effects of temperature changes on mortality, and on differences in heat-related mortality over time. This thesis aimed to address the following research questions: 1. How can the case-crossover design and distributed lag non-linear models be combined? 2. Is there any significant difference in effect estimates between time series and spatiotemporal models? 3. How can the effects of temperature changes between neighbouring days on mortality be assessed? 4. Is there any change in temperature effects on mortality over time? To combine the case-crossover design and the distributed lag non-linear model, datasets of deaths, weather conditions (minimum temperature, mean temperature, maximum temperature, and relative humidity), and air pollution were acquired for Tianjin, China, for the years 2005 to 2007. I demonstrated how to combine the case-crossover design with a distributed lag non-linear model. This allows the case-crossover design to estimate the non-linear and delayed effects of temperature whilst controlling for seasonality. There was a consistent U-shaped relationship between temperature and mortality. Cold effects were delayed by 3 days and persisted for 10 days. Hot effects were acute, lasted for three days, and were followed by mortality displacement for non-accidental, cardiopulmonary, and cardiovascular deaths. Mean temperature was a better predictor of mortality (based on model fit) than maximum or minimum temperature. It is still unclear whether spatiotemporal models using spatial temperature exposure produce better estimates of mortality risk than time series models that use a single site’s temperature or averaged temperature from a network of sites. Daily mortality data were obtained from 163 locations across Brisbane city, Australia, from 2000 to 2004. Ordinary kriging was used to interpolate spatial temperatures across the city based on 19 monitoring sites. A spatiotemporal model was used to examine the impact of spatial temperature on mortality. A time series model was used to assess the effects of a single site’s temperature, and of temperature averaged from 3 monitoring sites, on mortality. Squared Pearson scaled residuals were used to check model fit. The results of this study show that even though spatiotemporal models gave a better model fit than time series models, the two approaches gave similar effect estimates. Time series analyses using temperature recorded at a single monitoring site, or the average temperature of multiple sites, estimated the association between temperature and mortality as well as the spatiotemporal model did. A time series Poisson regression model was used to estimate the association between temperature change and mortality in summer in Brisbane, Australia during 1996–2004 and Los Angeles, United States during 1987–2000.
Temperature change was calculated as the current day's mean temperature minus the previous day's mean. In Brisbane, a drop of more than 3 °C in temperature between days was associated with relative risks (RRs) of 1.16 (95% confidence interval (CI): 1.02, 1.31) for non-external mortality (NEM), 1.19 (95% CI: 1.00, 1.41) for NEM in females, and 1.44 (95% CI: 1.10, 1.89) for NEM in those aged 65–74 years. An increase of more than 3 °C was associated with RRs of 1.35 (95% CI: 1.03, 1.77) for cardiovascular mortality and 1.67 (95% CI: 1.15, 2.43) for people aged < 65 years. In Los Angeles, only a drop of more than 3 °C was significantly associated with RRs of 1.13 (95% CI: 1.05, 1.22) for total NEM, 1.25 (95% CI: 1.13, 1.39) for cardiovascular mortality, and 1.25 (95% CI: 1.14, 1.39) for people aged ≥ 75 years. In both cities, there were joint effects of temperature change and mean temperature on NEM. A change in temperature of more than 3 °C, whether positive or negative, has an adverse impact on mortality even after controlling for mean temperature. I examined the variation in the effects of high temperatures on elderly mortality (age ≥ 75 years) by year, city and region for 83 large US cities between 1987 and 2000. High temperature days were defined as two or more consecutive days with temperatures above the 90th percentile for each city during each warm season (May 1 to September 30). The mortality risk for high temperatures was decomposed into a "main effect" due to high temperatures, using a distributed lag non-linear function, and an "added effect" due to consecutive high temperature days. I pooled yearly effects across regions and overall effects at both regional and national levels. The effects of high temperature (both main and added effects) on elderly mortality varied greatly by year, city and region. The years with higher heat-related mortality were often followed by years with relatively lower mortality. Understanding this variability in the effects of high temperatures is important for the development of heat-warning systems. In conclusion, this thesis makes contributions in several respects. The case-crossover design was combined with a distributed lag non-linear model to assess the effects of temperature on mortality in Tianjin, allowing the case-crossover design to flexibly estimate the non-linear and delayed effects of temperature. Both extreme cold and high temperatures increased the risk of mortality in Tianjin. A time series model using a single site's temperature, or temperature averaged from several sites, can be used to examine the effects of temperature on mortality. Temperature change, whether a marked drop or a marked rise, increases the risk of mortality. The high temperature effect on mortality is highly variable from year to year.
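As a simple illustration of the temperature-change exposure defined above (the current day's mean temperature minus the previous day's), the sketch below computes day-to-day changes from a daily series and flags changes of more than 3 °C in either direction. The use of pandas, the column names, and the example values are assumptions for illustration only.

    import pandas as pd

    # Daily mean temperatures in °C (values invented for illustration).
    df = pd.DataFrame(
        {"mean_temp": [24.1, 25.0, 28.3, 24.9, 21.2, 22.0]},
        index=pd.date_range("2004-01-01", periods=6, freq="D"),
    )

    # Temperature change = today's mean minus yesterday's mean.
    df["temp_change"] = df["mean_temp"].diff()

    # Flag day-to-day drops or rises of more than 3 °C.
    df["drop_gt_3C"] = df["temp_change"] < -3
    df["rise_gt_3C"] = df["temp_change"] > 3
    print(df)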
Abstract:
Numerous crops grow in sugar regions that have the potential to increase the amount of biomass available to a small bagasse-based pulp factory. Arundo donax and sorghum offer unique advantages to farmers compared with other agricultural crops. Sorghum bicolor requires only 1/3 of the water of sugarcane. Arundo donax is a very high-yield crop; it can also grow with little water, and it has the further advantage of being highly stress tolerant, making it suitable for land unsuited to other crops. Pulps produced from these crops were benchmarked against sugarcane bagasse pulp. Arundo, sorghum and bagasse were pulped using KOH and anthraquinone to 20 Kappa number so as to produce a bleachable pulp. The unbleached sorghum pulp had better tensile strength properties than the unbleached Arundo pulp (43.8 Nm/g compared with 21.4 Nm/g), and the bleached sorghum pulp tensile strength was similar to bagasse (28.4 Nm/g). At 20 Kappa number, sorghum pulp had an acceptable yield for a non-wood fibre (45% cf. 55% for bagasse), whereas Arundo donax pulp had low tensile strength and a relatively low yield (38.7%), even for an agricultural fibre, and required severe cooking conditions to achieve delignification similar to sugarcane bagasse or sorghum. Sorghum and Arundo donax produced thicker handsheets than bagasse (>160 μm cf. 122 μm for bagasse). In preliminary experiments, sorghum and bagasse responded slightly better to Totally Chlorine Free bleaching (QPP), although none of the pulps achieved a satisfactory brightness level and more optimisation is needed.
Abstract:
The building sector is the dominant consumer of energy and therefore a major contributor to anthropogenic climate change. The rapid generation of photorealistic, 3D environment models with incorporated surface temperature data has the potential to improve thermographic monitoring of building energy efficiency. In pursuit of this goal, we propose a system which combines a range sensor with a thermal-infrared camera. Our proposed system can generate dense 3D models of environments with both appearance and temperature information, and is the first such system to be developed using a low-cost RGB-D camera. The proposed pipeline processes depth maps successively, forming an ongoing pose estimate of the depth camera and optimizing a voxel occupancy map. Voxels are assigned 4 channels representing estimates of their true RGB and thermal-infrared intensity values. Poses corresponding to each RGB and thermal-infrared image are estimated through a combination of timestamp-based interpolation and pre-determined knowledge of the extrinsic calibration of the system. Raycasting is then used to color the voxels to represent both visual appearance using RGB, and an estimate of the surface temperature. The output of the system is a dense 3D model which can simultaneously represent both RGB and thermal-infrared data using one of two alternative representation schemes. Experimental results demonstrate that the system is capable of accurately mapping difficult environments, even in complete darkness.
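As a rough sketch of the 4-channel voxel representation described above (estimates of true RGB plus thermal-infrared intensity per voxel), the code below allocates such a grid and blends one observation into a voxel. The grid resolution, array layout and blending rule are assumptions for illustration, not the authors' implementation.

    import numpy as np

    # Voxel grid where each voxel stores 4 channels: R, G, B and thermal intensity.
    nx, ny, nz = 128, 128, 64                                     # grid resolution (illustrative)
    voxels = np.full((nx, ny, nz, 4), np.nan, dtype=np.float32)   # NaN = unobserved

    def update_voxel(grid, idx, rgb, thermal, weight=0.5):
        """Blend a new observation into a voxel's colour/temperature estimate."""
        observation = np.array([*rgb, thermal], dtype=np.float32)
        current = grid[idx]
        if np.isnan(current).any():
            grid[idx] = observation                # first observation of this voxel
        else:
            grid[idx] = (1 - weight) * current + weight * observation

    # e.g. a raycast hit at voxel (10, 20, 5): mid-grey surface at 31.5 °C
    update_voxel(voxels, (10, 20, 5), rgb=(128, 128, 128), thermal=31.5)
    print(voxels[10, 20, 5])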
Abstract:
In this paper we demonstrate passive vision-based localization in environments more than two orders of magnitude darker than the current benchmark, using a $100 webcam and a $500 camera. Our approach uses the camera’s maximum exposure duration and sensor gain to achieve appropriately exposed images even in unlit night-time environments, albeit with extreme levels of motion blur. Using the SeqSLAM algorithm, we first evaluate the effect of variable motion blur caused by simulated exposures of 132 ms to 10000 ms duration on localization performance. We then use actual long-exposure camera datasets to demonstrate day-night localization in two different environments. Finally we perform a statistical analysis that compares the baseline performance of matching unprocessed greyscale images to using patch normalization and local neighbourhood normalization – the two key SeqSLAM components. Our results and analysis show for the first time why the SeqSLAM algorithm is effective, and demonstrate the potential for cheap camera-based localization systems that function across extreme perceptual change.
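Patch normalization, one of the two SeqSLAM components compared above, rescales each small patch of a low-resolution greyscale image to zero mean and unit variance so that matching tolerates local illumination change. The numpy sketch below illustrates the idea; the patch size and the epsilon guard are illustrative choices rather than values from the paper.

    import numpy as np

    def patch_normalize(image, patch_size=8, eps=1e-6):
        """Normalize each non-overlapping patch to zero mean and unit variance."""
        img = image.astype(np.float32).copy()
        h, w = img.shape
        for y in range(0, h - patch_size + 1, patch_size):
            for x in range(0, w - patch_size + 1, patch_size):
                patch = img[y:y + patch_size, x:x + patch_size]  # view into img
                patch -= patch.mean()
                patch /= patch.std() + eps       # eps guards against flat patches
        return img

    # Usage on a synthetic downsampled greyscale frame.
    frame = np.random.randint(0, 256, size=(64, 64), dtype=np.uint8)
    normalized = patch_normalize(frame)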
Abstract:
Property is an elusive concept. In many respects it has been regarded as a source of authority to use, develop and make decisions about whatever is the subject matter of this right of ownership. This is true whether the holder of this right of ownership is a private entity or a public entity. Increasingly, a right of ownership of this kind has been recognised not only as a source of authority but also as a mechanism for restricting, limiting and perhaps even prohibiting existing or proposed activities that impact upon the environment. It is therefore increasingly an instrument of regulation as much as an instrument of authorisation. The protection and conservation of the environment are ultimately a matter of the public interest. This is not to suggest that individual holders of rights of ownership are not interested in protecting the environment; it is open to them to do so in the exercise of a right of ownership as a source of authorisation. However, a right of ownership – whether private or public – has increasingly become the mechanism by which the environment is protected and conserved, through the use of rights of ownership as a means of regulation. This paper addresses these issues from a doctrinal as well as a practical perspective on how the environment is managed.
Abstract:
The function of environmental governance and the principle of the rule of law are both controversial and challenging. To apply the principle of the rule of law to the function of environmental governance is perhaps even more controversial and challenging. A system of environmental governance seeks to bring together the range of competitive and potentially conflicting interests in how the environment and its resources are managed. Increasingly it is the need for economic, social and ecological sustainability that brings these interests – both public and private – together. Then there is the relevance of the principle of the rule of law. Economic, social and ecological sustainability will be achieved – if at all – by a complex series of rules of law that are capable of enforcement so as to ensure compliance with them. To what extent do these rules of law reflect the principle of the rule of law? Is the principle of the rule of law the formally unstated value that is expected to underpin the legal system or is it the normative predicate that directs the legal system both vertically and horizontally? Is sustainability an aspirational value or a normative predicate according to which the environment and its resources are managed? Let us deal sequentially with these issues by reviewing a number of examples that demonstrate the relationship between environmental governance and the rule of law.
Abstract:
Predicate encryption (PE) is a new primitive which supports flexible control over access to encrypted data. In PE schemes, users' decryption keys are associated with predicates f and ciphertexts encode attributes a that are specified during the encryption procedure. A user can successfully decrypt if and only if f(a) = 1. In this thesis, we investigate several properties that are crucial to PE. We focus on the expressiveness of PE, Revocable PE, and Hierarchical PE (HPE) with forward security. For all proposed systems, we provide a security model and analysis using the widely accepted computational complexity approach. Our first contribution is to explore the expressiveness of PE. Existing PE supports a wide class of predicates such as conjunctions of equality, comparison and subset queries, disjunctions of equality queries, and, more generally, arbitrary combinations of conjunctive and disjunctive equality queries. We advance PE to evaluate more expressive predicates, e.g., disjunctive comparison or disjunctive subset queries. Such expressiveness is achieved at the cost of computational and space overhead. To improve the performance, we appropriately revise the PE to reduce the computational and space cost. Furthermore, we propose a heuristic method to reduce disjunctions in the predicates. Our schemes are proved in the standard model. We then introduce the concept of Revocable Predicate Encryption (RPE), which extends the previous PE setting with revocation support: private keys can be used to decrypt an RPE ciphertext only if they match the decryption policy (defined via attributes encoded into the ciphertext and predicates associated with private keys) and were not revoked by the time the ciphertext was created. We propose two RPE schemes. Our first scheme, termed Attribute-Hiding RPE (AH-RPE), offers attribute-hiding, which is the standard PE property. Our second scheme, termed Full-Hiding RPE (FH-RPE), offers even stronger privacy guarantees, i.e., apart from possessing the attribute-hiding property, the scheme also ensures that no information about revoked users is leaked from a given ciphertext. The proposed schemes are also proved to be secure under well-established assumptions in the standard model. Secrecy of decryption keys is an important prerequisite for the security of (H)PE, and compromised private keys must be immediately replaced. The notion of Forward Security (FS) reduces damage from compromised keys by guaranteeing confidentiality of messages that were encrypted prior to the compromise event. We present the first Forward-Secure Hierarchical Predicate Encryption (FS-HPE) scheme that is proved secure in the standard model. Our FS-HPE scheme offers several desirable properties: time-independent delegation of predicates (to support dynamic behavior for delegation of decryption rights to new users), local update of users' private keys (i.e., no master authority needs to be contacted), forward security, and an encryption process that does not require knowledge of predicates at any level, including when those predicates join the hierarchy.
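To make the decryption condition above concrete, the toy sketch below models only the access-control semantics of PE: a key for predicate f recovers the payload of a ciphertext carrying attributes a exactly when f(a) = 1. It performs no actual encryption, and the disjunctive comparison predicate shown is purely illustrative.

    # Toy model of PE access-control semantics only; no cryptography is performed.
    def make_ciphertext(attributes, payload):
        return {"attrs": attributes, "payload": payload}

    def make_key(predicate):
        return {"f": predicate}

    def decrypt(key, ciphertext):
        if key["f"](ciphertext["attrs"]) == 1:
            return ciphertext["payload"]
        raise PermissionError("predicate not satisfied by ciphertext attributes")

    # Illustrative disjunctive comparison predicate: age < 18 OR clearance >= 3.
    predicate = lambda a: 1 if (a["age"] < 18 or a["clearance"] >= 3) else 0

    ct = make_ciphertext({"age": 25, "clearance": 4}, payload="record #42")
    print(decrypt(make_key(predicate), ct))  # predicate satisfied -> "record #42"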
Abstract:
Article 2(2) of the Kyoto Protocol imposes an obligation only on certain developed countries, working through the International Maritime Organisation (IMO), to pursue the reduction of greenhouse gas (GHG) emissions from marine bunker fuels. The IMO recently took the initiative to adopt a new legal instrument for the reduction of ship-generated greenhouse gas emissions. Some developing countries have suggested that the proposed IMO initiative should strictly adhere to Article 2(2) of the Kyoto Protocol and the principle of Common but Differentiated Responsibility (CBDR). Against this backdrop, this article reviews the extent to which it is possible to propose an international legal instrument for the reduction of GHG emissions from marine bunker fuels that is applicable only to ships from developed countries, considering the complex characteristics of the international shipping industry. This article also examines how far this approach is justifiable even within the framework of the CBDR principle.
Abstract:
Objective: To evaluate the prescribing practices of Australian dispensing doctors (DDs) and to explore their interpretations of the findings. Design, participants and setting: Sequential explanatory mixed methods. The quantitative phase comprised analysis of Pharmaceutical Benefits Scheme (PBS) claims data of DDs and non-DDs, 1 July 2005 to 30 June 2007. The qualitative phase involved semi-structured interviews with DDs in rural and remote general practice across Australian states, August 2009 to February 2010. Main outcome measures: The number of PBS prescriptions per 1000 patients and use of Regulation 24 of the National Health (Pharmaceutical Benefits) Regulations 1960 (r. 24); DDs' interpretation of the findings. Results: 72 DDs' and 1080 non-DDs' PBS claims data were analysed quantitatively. DDs issued fewer prescriptions per 1000 patients (9452 v 15057; P = 0.003), even with a similar proportion of concessional patients and patients aged >65 years in their populations. DDs issued significantly more r. 24 prescriptions per 1000 prescriptions than non-DDs (314 v 67; P = 0.008). Interviews with 22 DDs indicated that the lower prescribing rates were due to perceived expectations from peers regarding prescribing norms and the need to generate less administrative paperwork in small practices. Conclusions: Contrary to overseas findings, we found no evidence that Australian DDs overprescribed because of their additional dispensing role. MJA 2011; 195: 172-175
Abstract:
LiFePO4 is a commercially available battery material with good theoretical discharge capacity, excellent cycle life and increased safety compared with competing Li-ion chemistries. It has been the focus of considerable experimental and theoretical scrutiny in the past decade, resulting in LiFePO4 cathodes that perform well at high discharge rates. This scrutiny has raised several questions about the behaviour of LiFePO4 material during charge and discharge. In contrast to many other battery chemistries that intercalate homogeneously, LiFePO4 can phase-separate into highly and lowly lithiated phases, with intercalation proceeding by advancing an interface between these two phases. The main objective of this thesis is to construct mathematical models of LiFePO4 cathodes that can be validated against experimental discharge curves. This is in an attempt to understand some of the multi-scale dynamics of LiFePO4 cathodes that can be difficult to determine experimentally. The first section of this thesis constructs a three-scale mathematical model of LiFePO4 cathodes that uses a simple Stefan problem (which has been used previously in the literature) to describe the assumed phase-change. LiFePO4 crystals have been observed agglomerating in cathodes to form a porous collection of crystals and this morphology motivates the use of three size-scales in the model. The multi-scale model developed validates well against experimental data and this validated model is then used to examine the role of manufacturing parameters (including the agglomerate radius) on battery performance. The remainder of the thesis is concerned with investigating phase-field models as a replacement for the aforementioned Stefan problem. Phase-field models have recently been used in LiFePO4 and are a far more accurate representation of experimentally observed crystal-scale behaviour. They are based around the Cahn-Hilliard-reaction (CHR) IBVP, a fourth-order PDE with electrochemical (flux) boundary conditions that is very stiff and possesses multiple time and space scales. Numerical solutions to the CHR IBVP can be difficult to compute and hence a least-squares based Finite Volume Method (FVM) is developed for discretising both the full CHR IBVP and the more traditional Cahn-Hilliard IBVP. Phase-field models are subject to two main physicality constraints and the numerical scheme presented performs well under these constraints. This least-squares based FVM is then used to simulate the discharge of individual crystals of LiFePO4 in two dimensions. This discharge is subject to isotropic Li+ diffusion, based on experimental evidence that suggests the normally orthotropic transport of Li+ in LiFePO4 may become more isotropic in the presence of lattice defects. Numerical investigation shows that two-dimensional Li+ transport results in crystals that phase-separate, even at very high discharge rates. This is very different from results shown in the literature, where phase-separation in LiFePO4 crystals is suppressed during discharge with orthotropic Li+ transport. Finally, the three-scale cathodic model used at the beginning of the thesis is modified to simulate modern, high-rate LiFePO4 cathodes. High-rate cathodes typically do not contain (large) agglomerates and therefore a two-scale model is developed. The Stefan problem used previously is also replaced with the phase-field models examined in earlier chapters. 
The results from this model are then compared with experimental data and fit poorly, though a significant parameter regime could not be investigated numerically. Many-particle effects, however, are evident in the simulated discharges, which match the conclusions of recent literature. These effects result in crystals that are subject to local currents very different from the discharge rate applied to the cathode, which impacts the phase-separating behaviour of the crystals and raises questions about the validity of using cathodic-scale experimental measurements to determine crystal-scale behaviour.
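For reference, a standard statement of the Cahn-Hilliard phase-field model that the CHR problem builds on is sketched below in LaTeX; the particular free energy f, mobility M and electrochemical reaction boundary condition used in the thesis are not reproduced here, and the reaction (flux) boundary condition is shown only schematically.

    % Generic Cahn-Hilliard evolution with chemical potential mu,
    % and a schematic reaction flux condition on the crystal surface.
    \frac{\partial c}{\partial t} = \nabla \cdot \left( M \, \nabla \mu \right),
    \qquad
    \mu = \frac{\partial f}{\partial c} - \kappa \nabla^{2} c,
    \qquad
    -M \, \nabla \mu \cdot \mathbf{n} = R(c, \mu) \ \text{on the surface.}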