Abstract:
Chemical control of surface functionality and topography is an essential requirement for many technological purposes. In particular, the covalent attachment of monomeric proteins to surfaces has been the object of intense study in recent years, for applications as varied as electrochemistry, immuno-sensing, and the production of biocompatible coatings. Little is known, however, about the characteristics and requirements underlying the surface attachment of supramolecular protein nanostructures. Amyloid fibrils formed by the self-assembly of peptide and protein molecules represent one important class of such structures. These highly organized beta-sheet-rich assemblies are a hallmark of a range of neurodegenerative disorders, including Alzheimer's disease and type II diabetes, but recent findings suggest that they have much broader significance, potentially representing the global free-energy minima of the energy landscapes of proteins and having potential applications in materials science. In this paper, we describe strategies for attaching amyloid fibrils formed from different proteins to gold surfaces under different solution conditions. Our methods involve the reaction of sulfur-containing small molecules (cystamine and 2-iminothiolane) with the amyloid fibrils, enabling their covalent linkage to gold surfaces. We demonstrate that irreversible attachment using these approaches makes possible the quantitative analysis of experiments using biosensor techniques, such as quartz crystal microbalance (QCM) assays, which are revolutionizing our understanding of the mechanisms of amyloid growth and the factors that determine its kinetic behavior. Moreover, our results shed light on the nature and relative importance of covalent versus noncovalent forces acting on protein superstructures at metal surfaces.
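For context, an aside not drawn from the paper itself: quantitative interpretation of QCM data for thin, rigid adsorbed films commonly rests on the Sauerbrey relation, which maps the measured frequency shift to the adsorbed mass per unit area,

$$\Delta f \;=\; -\,\frac{2 f_0^{2}}{\sqrt{\rho_q \mu_q}}\;\frac{\Delta m}{A},$$

where $f_0$ is the fundamental resonant frequency of the quartz crystal, $A$ its active area, and $\rho_q$ and $\mu_q$ the density and shear modulus of quartz; fibril growth on the functionalized sensor then appears as a progressive negative frequency shift.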
Abstract:
The objective of this study was to determine whether a short training program using real foods would decrease participants' portion-size estimation errors. Ninety student volunteers (20.18±0.44 years old) from the University of the Basque Country (Spain) were trained in observational techniques and tested on food-weight estimation during and after a 3-hour training period. The program included 57 commonly consumed foods representing a variety of forms (125 different shapes). Estimates of food weight were compared with actual weights. The effectiveness of training was determined by examining the change in absolute percentage error across all observers and all foods over time. Data were analyzed using SPSS v. 13.0. Portion-size errors decreased after training for most of the foods. Additionally, the accuracy of the estimates varied clearly by food group and form. Amorphous foods were estimated least accurately both before and after training. Our findings suggest that future dietitians can be trained to estimate quantities by direct observation across a wide range of foods. However, this training may have been too brief for participants to fully assimilate its application.
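As a simple illustration of the metric (a hypothetical sketch; the weights are invented, not the study's data), the absolute percentage error per food and its mean can be computed as:

```python
# Hypothetical sketch of the error metric: mean absolute percentage error
# (MAPE) between estimated and actual food weights. Values are invented.
def mape(estimated, actual):
    """Mean of |estimate - actual| / actual, expressed in percent."""
    errors = [abs(e - a) / a * 100.0 for e, a in zip(estimated, actual)]
    return sum(errors) / len(errors)

# Before vs. after training: the error should fall if training works.
before = mape([120.0, 80.0, 210.0], actual=[100.0, 100.0, 200.0])
after = mape([105.0, 95.0, 205.0], actual=[100.0, 100.0, 200.0])
print(f"MAPE before: {before:.1f}%  after: {after:.1f}%")
```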
Abstract:
We present the Unified Form Language (UFL), a domain-specific language for representing weak formulations of partial differential equations with a view to numerical approximation. Features of UFL include support for variational forms and functionals, automatic differentiation of forms and expressions, arbitrary function space hierarchies for multifield problems, general differential operators, and flexible tensor algebra. With these features, UFL has been used to effortlessly express finite element methods for complex systems of partial differential equations in near-mathematical notation, resulting in compact, intuitive and readable programs. We present in this work the language and its construction. An implementation of UFL is freely available as an open-source software library. The library generates abstract syntax tree representations of variational problems, which are used by other software libraries to generate concrete low-level implementations. Some application examples are presented and libraries that support UFL are highlighted.
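To give a flavor of the notation, here is a minimal sketch of the weak form of the Poisson equation in legacy UFL syntax (illustrative, not taken from the paper; newer UFL releases move element definition into separate packages):

```python
# Poisson weak form in legacy UFL: find u such that a(u, v) = L(v) for all v.
from ufl import (Coefficient, FiniteElement, TestFunction, TrialFunction,
                 dx, grad, inner, triangle)

element = FiniteElement("Lagrange", triangle, 1)  # P1 elements on triangles
u = TrialFunction(element)   # unknown trial function
v = TestFunction(element)    # test function
f = Coefficient(element)     # source term

a = inner(grad(u), grad(v)) * dx  # bilinear form
L = f * v * dx                    # linear form
```

A form compiler in the surrounding toolchain (for example the FEniCS Form Compiler) consumes the abstract syntax trees of `a` and `L` and generates the concrete low-level element kernels.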
Abstract:
The distribution of dissolved, soluble and colloidal fractions of Al and Ti was assessed by ultrafiltration studies in the upper water column of the eastern tropical North Atlantic. The dissolved fractions of both metals were found to be dominated by the soluble phase smaller than 10 kDa. The colloidal associations were very low (0.2–3.4%) for Al and not detectable for Ti. These findings stand in some contrast to previous estimates for Ti and to the predominant occurrence of both metals as hydrolyzed species in seawater. However, a low tendency to form inorganic colloids can be expected, as dissolved Al and dissolved Ti are present in seawater within their inorganic solubility levels. In addition, association with functional organic groups in the colloidal phase is unlikely for both metals. Vertical distributions of the dissolved fractions showed surface maxima of up to 43 nM for Al and 157 pM for Ti, reflecting their predominant supply to the open ocean from atmospheric sources. In the surface waters, an excess of dissolved Al over dissolved Ti was present relative to the crustal source, indicating higher solubility and thus elevated inputs of dissolved Al from atmospheric mineral particles. At most stations, subsurface minima of Al and Ti were observed; these can be ascribed to scavenging processes and/or biological uptake. The dissolved Al concentrations decreased by 80–90% from the surface maximum to the subsurface minimum. Estimated residence times in the upper 100 m of the water column ranged between 1.6 and 4 years for dissolved Al and between 14 and 17 years for dissolved Ti. These short residence times stand in some contrast to the low colloidal associations of Al and Ti and to the assumed role of colloids as intermediates in scavenging processes. This suggests that either the removal of both metals occurs predominantly via direct transfer of the hydrolyzed species into the particulate fraction, or that the colloidal phase is rapidly turned over in the upper water column.
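For orientation, a sketch of the standard box-model definition behind such residence-time estimates (the abstract does not state its exact formulation): at steady state, the residence time is the standing stock of the dissolved metal in the surface layer divided by its supply (or removal) flux,

$$\tau \;=\; \frac{\int_{0}^{100\,\mathrm{m}} C(z)\,\mathrm{d}z}{F},$$

where $C(z)$ is the dissolved concentration profile and $F$ the flux through the layer.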
Abstract:
Bills of rights are currently a much debated topic in various jurisdictions throughout the world. Almost all democratic nations, with the exception of Australia, now have a bill of rights. These take a variety of forms, ranging from constitutionally entrenched bills of rights, such as those of the United States and South Africa, to non-binding statements of rights. Falling between these approaches are non-entrenched, statutory bills of rights. As regards the latter, a model which has become increasingly popular is that of bills of rights based on interpretative obligations, whereby duties are placed upon courts to interpret national legislation in accordance with human rights standards. The aim of this book is to provide a comparative analysis of the bills of rights of a number of jurisdictions which have chosen to adopt such an approach. The jurisdictions considered are New Zealand, the United Kingdom, the Australian Capital Territory and the Australian state of Victoria.
Very few books published to date contain a detailed comparative analysis of the bills of rights that this book addresses. The book adopts a unique thematic approach, whereby six aspects of the bills of rights in question have been selected for comparative analysis and a chapter is allocated to each. This approach serves to facilitate the comparative discussion and to emphasise the centrality of the comparative methodology.
Abstract:
The longstanding emphasis on the neighbourhood as a scale for intervention and action has given rise to a variety of forms of governance with a number of different rationales. The predominant rationales about the purpose of neighbourhood governance are encapsulated in a fourfold typology developed by Lowndes and Sullivan (2008). This article sets out to test this approach by drawing on an evaluation of neighbourhood initiatives in the City of Westminster which were delivered through a third sector organisation, the Paddington Development Trust. ‘Insider’ perspectives gathered at city and neighbourhood levels regarding the infrastructure for neighbourhood management are discussed and evaluated in the light of these rationales. The conclusions, while broadly reflecting Lowndes and Sullivan and a follow-up study of Manchester, suggest that in Westminster the civic and economic rationales tend to predominate. However, the Westminster approach is contingent on the prevailing ethos and funding regimes at central and local levels and remains relatively detached from mainstream services. While community empowerment is an important part of the policy rhetoric, it is argued that in practice a ‘strategy of containment’ operates whereby residents in the neighbourhoods have relatively little control over targets and resources and that new governance mechanisms can be relatively easily de-coupled when required. In retrospect, co-production might have been a more effective model for neighbourhood governance, not least given its fit with policy direction.
Abstract:
Connell’s concept of hegemonic masculinity is often reduced to a singular construct, consisting of “toxic” traits viewed as detrimental to well-being. However, the concept allows for variation in hegemony, including the possibility of forms more conducive to well-being. Through in-depth interviews with thirty male meditators in the United Kingdom, we explored the social dimensions of meditation practice to examine its potential implications for well-being. Most participants became involved with “communities of practice” centered on meditation that promoted new local hegemonies, and these included ideals experienced as conducive to well-being, like abstinence. However, social processes associated with hegemony, like hierarchy and marginalization, were not overturned. Moreover, participants faced challenges enacting new practices in relation to the broader system of hegemonic masculinity—outside these communities—reporting censure. Our findings are cautionary for professionals seeking to encourage well-being behaviors: that is, there is potential for adaptation in men, yet complex social processes influence this change.
Abstract:
This thesis explores the debate and issues regarding the status of visual inferences in the optical writings of René Descartes, George Berkeley and James J. Gibson. It gathers arguments from across their works and synthesizes an account of visual depth-perception that accurately reflects the larger, metaphysical implications of their philosophical theories. Chapters 1 and 2 address the Cartesian and Berkeleian theories of depth-perception, respectively. For Descartes and Berkeley the debate can be put in the following way: How is it possible that we experience objects as appearing outside of us, at various distances, if objects appear inside of us, in the representations of the individual's mind? Thus, the Descartes-Berkeley component of the debate takes place exclusively within a representationalist setting. Representational theories of depth-perception are rooted in the scientific discovery that objects project a merely two-dimensional patchwork of forms on the retina. I call this the "flat image" problem. This poses the problem of depth in terms of a difference between two- and three-dimensional orders (i.e., a gap to be bridged by one inferential procedure or another). Chapter 3 addresses Gibson's ecological response to the debate. Gibson argues that the perceiver cannot be flattened out into a passive, two-dimensional sensory surface. Perception is possible precisely because the body and the environment already have depth. Accordingly, the problem cannot be reduced to a gap between two- and three-dimensional givens, a gap crossed with a projective geometry. The crucial difference is not one of a dimensional degree. Chapter 3 explores this theme and attempts to excavate the empirical and philosophical suppositions that lead Descartes and Berkeley to their respective theories of indirect perception. Gibson argues that the notion of visual inference, which is necessary to substantiate representational theories of indirect perception, is highly problematic. To elucidate this point, the thesis steps into the representationalist tradition, in order to show that problems that arise within it demand a turn toward Gibson's information-based doctrine of ecological specificity (which is to say, the theory of direct perception). Chapter 3 concludes with a careful examination of Gibsonian affordances as the sole objects of direct perceptual experience. The final section provides an account of affordances that locates the moving, perceiving body at the heart of the experience of depth; an experience which emerges in the dynamical structures that cross the body and the world.
Abstract:
Transverse, subglacial bedforms (ribbed moraines) occur frequently in southern Keewatin, Nunavut, Canada, where they record a complex glacial history, including shifting centers of ice dispersal and fluctuating basal thermal regimes. Comprehensive mapping and quantitative morphometric analysis of the subglacial bedform archive in this sector reveal that ribbed moraines are spatially clustered by size and assume a broad range of visually distinct forms. Results suggest that end-member morphologies are consistent with a dichotomous polygenetic origin, and that a continuum of forms emerged through subsequent reshaping processes of variable intensity and duration. Translocation of mobile, immobile and quasi-mobile beds throughout the last glacial cycle conditioned the development of a subglacial deforming bed mosaic, and is likely responsible for the patchy zonation of palimpsest and inherited landscape signatures within this former core region of the Laurentide Ice Sheet. Comparison against field evidence collected from central Norway suggests that bedforming processes can be locally mediated by pre-existing topography.
Abstract:
A set of forms which was held together by a string. The title page is brittle and crumbling. Each page is dated and signed by either the Colonel or the Lieutenant Colonel A.A.G. [Assistant Adjutant General] Niagara.
Abstract:
This thesis comprises two parts: a theoretical study and a work of literary creation. The first part studies the role of desire in the thematic and philosophical approach employed by the writer Wilson Harris in his novel The Palace of the Peacock. In the first chapter, we show that Harris paradoxically uses empirical desire to demonstrate its very limits. In the second chapter, we address the problematic relationship, in Harris's work, between feminine and masculine subjectivity. In particular, we examine the representation of this relationship through metaphors drawn from the environment and from anatomy. We argue that the problematic character of the relationship between feminine and masculine subjectivities in the novel is in a sense necessitated by Harris's writing itself. In the third chapter, we engage with the debates on poetics that animate contemporary literature in order to situate our own impulse toward literary creation. At the same time, we attempt to recover some of the theoretical concepts formulated by Harris, in connection with our own poetics. There follows our literary creation project, entitled HEROISM/EULOGIES, which constitutes the fourth and final chapter of the thesis. This text, excerpted from a larger creative writing project, traces the movements of a number of subjects across an imagined America.
Abstract:
In this computerized, globalized and internet-connected world, our computers collect various types of information about every human being and store them in files secreted deep on the hard drive. Files like the cache, browser history and other temporary Internet files can be used to store sensitive information like logins and passwords, names, addresses, and even credit card numbers. A hacker can obtain this information by illicit means and share it with someone else, or can install malicious software on your computer that will extract your sensitive and secret information. Identity theft poses a very serious problem to everyone today. If you have a driver's license, a bank account, a computer, a ration card number, a PAN card number, an ATM card or simply a social security number, you are more than at risk: you are a target. Whether you are new to the idea of identity theft, or you have some unanswered questions, we have compiled a quick refresher below that should bring you up to speed. Identity theft is a term used to refer to fraud that involves pretending to be someone else in order to steal money or obtain other benefits. It is a serious crime, which has been increasing at a tremendous rate all over the world since the evolution of the Internet. There is widespread agreement that identity theft causes financial damage to consumers, lending institutions, retail establishments, and the economy as a whole. Surprisingly, there is little good public information available about the scope of the crime and the actual damages it inflicts. Accounts of identity theft in recent mass media and in film or literature have centered on the exploits of 'hackers' - variously lauded or reviled - who are depicted as cleverly subverting corporate firewalls or other data protection defenses to gain unauthorized access to credit card details, personnel records and other information. Reality is more complicated, with electronic identity fraud taking a range of forms. The impact of those forms is not necessarily quantifiable as a financial loss; it can involve intangible damage to reputation, time spent dealing with disinformation, and exclusion from particular services because a stolen name has been used improperly. Overall, we can consider electronic networks an enabler of identity theft, with the thief, for example, gaining information online for action offline, or finding online the basis for theft or other injury. As Fisher pointed out, "These new forms of high-tech identity and securities fraud pose serious risks to investors and brokerage firms across the globe." I am a victim of identity theft. Being a victim, I felt the need to create awareness among computer and internet users, particularly youngsters, in India. Nearly 70 per cent of India's population lives in villages. The Government of India has already started providing computer and internet facilities even to remote villages through various rural development and rural upliftment programmes. Highly educated people, established companies and world-famous financial institutions are becoming victims of identity theft. The question here is how vulnerable the illiterate and innocent rural people are if they are suddenly exposed to a new device through which someone can extract and exploit their personal data without their knowledge. In this research work, an attempt has been made to bring out the real problems associated with identity theft in developed countries from an economist's point of view.
Abstract:
A statistical methodology is proposed and tested for the analysis of extreme values of atmospheric wave activity at mid-latitudes. The adopted methods are the classical block-maximum and peak-over-threshold approaches, respectively based on the generalized extreme value (GEV) distribution and the generalized Pareto distribution (GPD). Time series of the 'Wave Activity Index' (WAI) and the 'Baroclinic Activity Index' (BAI) are computed from simulations of the General Circulation Model ECHAM4.6, which is run under perpetual January conditions. Both the GEV and the GPD analyses indicate that the extremes of WAI and BAI are Weibull distributed; this corresponds to distributions with an upper bound. However, a remarkably large variability is found in the tails of such distributions; distinct simulations carried out under the same experimental setup provide appreciably different estimates of the 200-yr WAI return level. The consequences of this phenomenon for applications of the methodology to climate change studies are discussed. The atmospheric configurations characteristic of the maxima and minima of WAI and BAI are also examined.
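As a hedged illustration of the two fitting routes described above (not the paper's own code; the synthetic index, block length and threshold are placeholders):

```python
# Block-maximum (GEV) and peak-over-threshold (GPD) fits on a synthetic
# activity index; all numbers here are illustrative.
import numpy as np
from scipy.stats import genextreme, genpareto

rng = np.random.default_rng(0)
index = rng.gumbel(loc=10.0, scale=2.0, size=200 * 90)  # 200 seasons x 90 days

# Block maxima: one maximum per 90-day block, fitted with the GEV.
block_max = index.reshape(200, 90).max(axis=1)
c_gev, loc_gev, scale_gev = genextreme.fit(block_max)

# Peak over threshold: exceedances over a high quantile, fitted with the GPD.
u = np.quantile(index, 0.98)
c_gpd, _, scale_gpd = genpareto.fit(index[index > u] - u, floc=0.0)

# In scipy's sign conventions, c_gev > 0 (GEV) and c_gpd < 0 (GPD) both
# indicate the bounded, Weibull-type tail reported in the abstract.
# 200-yr return level from the GEV fit of the block (seasonal) maxima:
rl_200 = genextreme.ppf(1.0 - 1.0 / 200.0, c_gev, loc=loc_gev, scale=scale_gev)
print(f"GEV shape {c_gev:.3f}, GPD shape {c_gpd:.3f}, 200-yr level {rl_200:.2f}")
```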
Abstract:
Atmospheric CO2 concentration has varied from minima of 170-200 ppm in glacials to maxima of 280-300 ppm in the recent interglacials. Photosynthesis by C3 plants is highly sensitive to CO2 concentration variations in this range. Physiological consequences of the CO2 changes should therefore be discernible in palaeodata. Several lines of evidence support this expectation. Reduced terrestrial carbon storage during glacials, indicated by the shift in the stable isotope composition of dissolved inorganic carbon in the ocean, cannot be explained by climate or sea-level changes. It is, however, consistent with predictions of current process-based models that propagate known physiological CO2 effects into net primary production at the ecosystem scale. Restricted forest cover during glacial periods, indicated by pollen assemblages dominated by non-arboreal taxa, cannot be reproduced accurately by palaeoclimate models unless CO2 effects on C3-C4 plant competition are also modelled. It follows that methods to reconstruct climate from palaeodata should account for CO2 concentration changes; when they do so, they yield results more consistent with palaeoclimate models. In conclusion, the palaeorecord of the Late Quaternary, interpreted with the help of climate and ecosystem models, provides evidence that CO2 effects at the ecosystem scale are neither trivial nor transient.
Abstract:
An experimental search for crystalline forms of creatine, including a variable-temperature X-ray powder diffraction (VT-XRPD) study, has produced three polymorphs and a formic acid solvate. The crystal structures of creatine forms I and II were determined from X-ray powder diffraction data, while the creatine-formic acid (1:1) solvate structure was obtained by single-crystal X-ray diffraction methods. Evidence of a third polymorphic form of creatine, obtained by rapid desolvation of creatine monohydrate, is also presented. The results highlight the role of automated parallel crystallisation, slurry experiments and VT-XRPD as powerful techniques for effective physical-form screening. They also highlight the importance of various complementary analytical techniques in structural characterisation and in achieving a better understanding of the relationships between the various solid-state forms. Analysis of the structural relationships between the solid-state forms of creatine using the XPac method provided a rationale for the different relative stabilities of forms I and II of creatine with respect to the monohydrate form.