43 results for 030108 Separation Science


Relevance:

20.00%

Publisher:

Abstract:

The focus of this study was the development of a simple method for coating a semi-permanent phospholipid layer onto a capillary for use in electrochromatography. The work involved finding good coating conditions, stabilizing the phospholipid coating, and examining the effect of adding divalent cations, cetyltrimethylammonium bromide, and polyethylene glycol (PEG)-lipids on the stability of the coating. Since a further purpose was to move toward more biological membrane coatings, the capillaries were also coated with cholesterol-containing liposomes and liposomes of red blood cell ghost lipids. Liposomes were prepared by extrusion, and large unilamellar vesicles with a diameter of about 100 nm were obtained. Zwitterionic phosphatidylcholine (PC) was used as a basic component, mainly 1-palmitoyl-2-oleyl-sn-glycero-3-phosphocholine (POPC) but also egg PC and 1,2-dipalmitoyl-sn-glycero-3-phosphocholine (DPPC). Different amounts of sphingomyelin, bovine brain phosphatidylserine, and cholesterol were added to the PC. The stability of the coating in 40 mM N-(2-hydroxyethyl)piperazine-N’-(2-ethanesulfonic acid) (HEPES) solution at pH 7.4 was studied by measuring the electroosmotic flow and by separating neutral steroids, basic proteins, and low-molar-mass drugs. The presence of PC in the coating solution was found to be essential to achieving a coating. The stability of the coating was improved by the addition of negative phosphatidylserine, cholesterol, divalent cations, or PEGylated lipids, and by working in the gel-state region of the phospholipid. A study of the effect of the divalent metal ions calcium, magnesium, and zinc on the PC coating showed that a 1:3 PC/Ca2+ or PC/Mg2+ molar ratio gave increased rigidity to the membrane and the best coating stability. The PEGylated lipids used in the study were sterically stabilized commercial lipids with covalently attached PEG chains. The vesicle size generally decreased when PEGylated lipids of higher molar mass were present in the vesicle. The predominance of discoidal micelles over liposomes increased with PEG chain length, and the average size of the vesicles thus decreased. In the capillary electrophoresis (CE) measurements a highly stable electroosmotic flow was achieved with 20% PEGylated lipid in the POPC coating dispersion, the best results being obtained for distearoyl PEG(3000) conjugates. The results suggest that smaller particles (discoidal micelles) result in tighter packing and better shielding of silanol groups on the silica wall. The effect of temperature on the coating stability was investigated by using DPPC liposomes at temperatures above (45 °C) and below (25 °C) the main phase transition temperature. Better results were obtained with DPPC in the more rigid gel state than in the fluid state: the electroosmotic flow was heavily suppressed and the PC coating was stabilized. Dispersions of DPPC with 0-30 mol% of cholesterol and sphingomyelin in different ratios, which more closely resemble natural membranes, also resulted in stable coatings. Finally, the CE measurements revealed that a stable coating is formed when capillaries are coated with liposomes of red blood cell ghost lipids.
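
The coating quality was followed through the electroosmotic flow (EOF), which in an uncoated fused-silica capillary is driven by ionized silanol groups; a successful phospholipid coating shields those groups and suppresses the EOF. As a rough illustration of how the EOF mobility is obtained from the migration time of a neutral marker in CE, a minimal sketch follows; the capillary dimensions, voltage and times are invented for the example and are not values from the thesis.

```python
def eof_mobility(L_total_m, L_detector_m, voltage_V, t_marker_s):
    """Electroosmotic mobility from the migration time of a neutral marker.

    Standard CE relation: mu_eof = (L_d * L_t) / (t_m * V), in m^2 V^-1 s^-1.
    """
    return (L_detector_m * L_total_m) / (t_marker_s * voltage_V)

# Illustrative numbers only (not from the study): a 48.5 cm capillary,
# 40.0 cm to the detector, 25 kV, and a neutral marker eluting at 3 min
# in a bare capillary versus 30 min in a well-coated one.
mu_bare = eof_mobility(0.485, 0.400, 25_000, 3.0 * 60)
mu_coated = eof_mobility(0.485, 0.400, 25_000, 30.0 * 60)
print(f"bare:   {mu_bare:.2e} m^2/(V s)")   # ~4.3e-08
print(f"coated: {mu_coated:.2e} m^2/(V s)") # an order of magnitude lower
```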

Relevance:

20.00%

Publisher:

Abstract:

Polymer-protected gold nanoparticles were successfully synthesized by both "grafting-from" and "grafting-to" techniques. The synthesis methods of the gold particles were systematically studied. Two chemically different homopolymers were used to protect the gold particles: thermo-responsive poly(N-isopropylacrylamide), PNIPAM, and polystyrene, PS. Both polymers were synthesized by a controlled/living radical polymerization process, reversible addition-fragmentation chain transfer (RAFT) polymerization, to obtain monodisperse polymers of various molar masses carrying dithiobenzoate end groups. Hence, particles protected either with PNIPAM (PNIPAM-AuNPs) or with a mixture of the two polymers (PNIPAM/PS-AuNPs, i.e., amphiphilic gold nanoparticles) were prepared. The particles contain monodisperse polymer shells, though the cores are somewhat polydisperse. Aqueous PNIPAM-AuNPs prepared using the "grafting-from" technique show thermo-responsive properties derived from the tethered PNIPAM chains. For PNIPAM-AuNPs prepared using the "grafting-to" technique, two phase transitions of PNIPAM were observed in microcalorimetric studies of the aqueous solutions. The first transition, with a sharp and narrow endothermic peak, occurs at a lower temperature, and the second, with a broader peak, at a higher temperature. In the first transition the PNIPAM segments show much higher cooperativity than in the second one. The observations are tentatively rationalized by assuming that the PNIPAM brush can be subdivided into two zones, an inner and an outer one. In the inner zone, the PNIPAM segments are close to the gold surface, densely packed, less hydrated, and undergo the first transition. In the outer zone, on the other hand, the PNIPAM segments are looser and more hydrated, adopt a restricted random coil conformation, and show a phase transition that depends on both the particle concentration and the chemical nature of the end groups of the PNIPAM chains. Monolayers of the amphiphilic gold nanoparticles at the air-water interface show several characteristic regions upon compression in a Langmuir trough at room temperature. These can be attributed to polymer conformational transitions from a pancake to a brush. The compression isotherms also show temperature dependence due to the thermo-responsive properties of the tethered PNIPAM chains. The films were successfully deposited on substrates by the Langmuir-Blodgett technique. Sessile drop contact angle measurements conducted on both sides of a monolayer deposited at room temperature reveal two slightly different contact angles, which may indicate phase separation between the tethered PNIPAM and PS chains on the gold core. The optical properties of the amphiphilic gold nanoparticles were studied both in situ at the air-water interface and in the deposited films. The in situ surface plasmon resonance (SPR) band of the monolayer shows a blue shift upon compression, while a red shift with the deposition cycle occurs in the deposited films. The blue shift is compression-induced and closely related to the conformational change of the tethered PNIPAM chains, which may cause a decrease in the polarity of the local environment of the gold cores. The red shift in the deposited films is due to weak interparticle coupling between adjacent particles. Temperature effects on the SPR band were also investigated in both cases. In the in situ case, at a constant surface pressure, an increase in temperature leads to a red shift of the SPR band, likely due to the shrinking of the tethered PNIPAM chains as well as to a slight decrease in the distance between adjacent particles, resulting in increased interparticle coupling. In the deposited films, however, the SPR band red-shifts with the deposition cycles more at high temperature than at low temperature. This is because the higher compressibility of the polymer-coated gold nanoparticles at high temperature leads to a smaller interparticle distance, resulting in increased interparticle coupling in the deposited multilayers.

Relevance:

20.00%

Publisher:

Abstract:

Comprehensive two-dimensional gas chromatography (GC×GC) offers enhanced separation efficiency, reliability in qualitative and quantitative analysis, the capability to detect low quantities, and information on the whole sample and its components. These features are essential in the analysis of complex samples, in which the number of compounds may be large or the analytes of interest are present at trace level. This study involved the development of instrumentation, data analysis programs and methodologies for GC×GC and their application in studies on qualitative and quantitative aspects of GC×GC analysis. Environmental samples were used as model samples. Instrumental development comprised the construction of three versions of a semi-rotating cryogenic modulator in which modulation was based on two-step cryogenic trapping with continuously flowing carbon dioxide as coolant. Two-step trapping was achieved by rotating, with a motor, the nozzle that sprays the carbon dioxide. The fastest rotation and highest modulation frequency were achieved with a permanent magnet motor, and modulation was most accurate when the motor was controlled with a microcontroller containing a quartz crystal. Heated wire resistors were unnecessary for the desorption step when liquid carbon dioxide was used as coolant. With the modulators developed in this study, the narrowest peaks were 75 ms wide at the base. Three data analysis programs were developed, allowing basic, comparison and identification operations. The basic operations enabled the visualisation of two-dimensional plots and the determination of retention times, peak heights and volumes. The overlaying feature in the comparison program allowed easy comparison of 2D plots. An automated identification procedure based on mass spectra and retention parameters allowed the qualitative analysis of data obtained by GC×GC and time-of-flight mass spectrometry. In the methodological development, sample preparation (extraction and clean-up) and GC×GC methods were developed for the analysis of atmospheric aerosol and sediment samples. Dynamic sonication-assisted extraction was well suited for atmospheric aerosols collected on a filter. A clean-up procedure utilising normal-phase liquid chromatography with ultraviolet detection worked well in the removal of aliphatic hydrocarbons from a sediment extract. GC×GC with flame ionisation detection or quadrupole mass spectrometry provided good reliability in the qualitative analysis of target analytes, whereas GC×GC with time-of-flight mass spectrometry was needed in the analysis of unknowns. The automated identification procedure was efficient in the analysis of large data files, but manual searching and analyst knowledge remain invaluable as well. Quantitative analysis was examined in terms of calibration procedures and the effect of matrix compounds on GC×GC separation. In addition to calibration in GC×GC with summed peak areas or peak volumes, simplified area calibration based on the normal GC signal can be used to quantify compounds in samples analysed by GC×GC, as long as certain qualitative and quantitative prerequisites are met. In a study of the effect of matrix compounds on GC×GC separation, it was shown that the quality of the separation of polycyclic aromatic hydrocarbons (PAHs) is not significantly disturbed by the amount of matrix, and that quantitativeness suffers only slightly in the presence of matrix when the amount of target compounds is low. The benefits of GC×GC in the analysis of complex samples easily outweigh the minor drawbacks of the technique. The developed instrumentation and methodologies performed well for environmental samples, but they could also be applied to other complex samples.
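
As background to the data analysis programs described above, the raw GC×GC signal is a single detector trace that is folded by the modulation period into a matrix, from which the 2D plots, retention times, peak heights and peak volumes are derived. A minimal sketch of this folding step follows; the sampling rate, modulation period and signal are invented for illustration and are not taken from the software developed in the thesis.

```python
import numpy as np

def fold_gcxgc(signal, sampling_rate_hz, modulation_period_s):
    """Fold a 1D GC×GC detector trace into a 2D array.

    Rows correspond to successive modulation cycles (first-dimension retention
    time), columns to the time within one cycle (second-dimension retention
    time). Any incomplete final cycle is discarded.
    """
    points_per_cycle = int(round(sampling_rate_hz * modulation_period_s))
    n_cycles = len(signal) // points_per_cycle
    return np.asarray(signal[: n_cycles * points_per_cycle]).reshape(
        n_cycles, points_per_cycle
    )

# Illustrative numbers only: 100 Hz acquisition, a 5 s modulation period and
# 10 min of data. A peak "volume" is then simply a sum over a 2D region.
trace = np.random.default_rng(0).random(100 * 60 * 10)
plane = fold_gcxgc(trace, sampling_rate_hz=100, modulation_period_s=5.0)
print(plane.shape)                  # (120, 500): 120 cycles x 500 points each
print(plane[40:43, 200:220].sum())  # crude peak volume over a small region
```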

Relevance:

20.00%

Publisher:

Abstract:

Human-wildlife conflicts are today an integral part of the rural development discourse. In this research, the main focus is on the spatial explanation, which is not a common approach in the reviewed literature. My research hypothesis is based on the assumption that human-wildlife conflicts occur when a wild animal crosses a perceived borderline between nature and culture and enters the realm of the other. The borderline between nature and culture marks a perceived division of spatial content in our senses of place. The animal subject that crosses this border becomes a subject out of place, meaning that the animal is then spatially located in a space where it should not be or where it does not belong according to tradition, custom, rules, law, public opinion, prevailing discourse or some other criteria set by human beings. The appearance of a wild animal in a domesticated space brings an uncontrolled subject into a space where humans have previously commanded total control of all other natural elements. A wild animal out of place may also threaten the biosecurity of the place in question. To test my hypothesis, I carried out a case study in the Liwale district in south-eastern Tanzania during June and July 2002. I also collected documents and carried out interviews in Dar es Salaam in 2003. I studied the human-wildlife conflicts in six rural villages, where a total of 183 persons participated in the village meetings. My research methods included semi-structured interviews, participatory mapping, a questionnaire survey and Q-methodology. The rural communities in the Liwale district have a long history of co-existing with wildlife, and they still have traditional knowledge of wildlife management and hunting. Wildlife conservation through the establishment of game reserves during the colonial era has escalated human-wildlife conflicts in the Liwale district. This study shows that the villagers perceive some wild animals in their images of the African countryside differently than the district- and regional-level civil servants do. From the small-scale subsistence farmers' point of view, wild animals continue to challenge the separation of the wild (the forests) and the domestic (the cultivated fields) spaces by moving across the perceived borders in search of food and shelter. As a result, the farmers may lose their crops, livestock or even their own lives in confrontations with wild animals. Human-wildlife conflicts in the Liwale district are manifold and cannot be explained simply on the basis of attitudes or perceived images of landscapes. However, the spatial explanation of these conflicts provides further understanding of why human-wildlife conflicts are so widely found across the world.

Relevance:

20.00%

Publisher:

Abstract:

Palaeoenvironments of the latter half of the Weichselian ice age and the transition to the Holocene, from ca. 52 to 4 ka, were investigated using isotopic analysis of oxygen, carbon and strontium in mammal skeletal apatite. The study material consisted predominantly of subfossil bones and teeth of the woolly mammoth (Mammuthus primigenius Blumenbach), collected from Europe and Wrangel Island, northeastern Siberia. All samples have been radiocarbon dated, and their ages range from >52 ka to 4 ka. Altogether, 100 specimens were sampled for the isotopic work. In Europe, the studies focused on the glacial palaeoclimate and habitat palaeoecology. To minimise the influence of possible diagenetic effects, the palaeoclimatological and ecological reconstructions were based on the enamel samples only. The results of the oxygen isotope analysis of mammoth enamel phosphate from Finland and adjacent northwestern Russia, Estonia, Latvia, Lithuania, Poland, Denmark and Sweden provide the first estimate of oxygen isotope values in glacial precipitation in northern Europe. The glacial precipitation oxygen isotope values range from ca. -9.2 ± 1.5‰ in western Denmark to -15.3‰ in Kirillov, northwestern Russia. These values are 0.6-4.1‰ lower than those in present-day precipitation, with the largest changes recorded in the currently marine-influenced southern Sweden and the Baltic region. The new enamel-derived oxygen isotope data from this study, combined with oxygen isotope records from earlier investigations on mammoth tooth enamel and palaeogroundwaters, facilitate a reconstruction of the spatial patterns of the oxygen isotope values of precipitation and palaeotemperatures over much of Europe. The reconstructed geographic pattern of oxygen isotope levels in precipitation during 52-24 ka reflects the progressive isotopic depletion of air masses moving northeast, consistent with a westerly source of moisture for the entire region and a circulation pattern similar to that of the present day. The application of regionally varied δ/T slopes, estimated from palaeogroundwater data and modern spatial correlations, yields reasonable estimates of glacial surface temperatures in Europe and implies 2-9 °C lower long-term mean annual surface temperatures during the glacial period. The isotopic composition of carbon in the enamel samples indicates a pure C3 diet for the European mammoths, in agreement with previous investigations of mammoth ecology. A faint geographical gradient in the carbon isotope values of enamel is discernible, with more negative values in the northeast. The spatial trend is consistent with the climatic implications of the enamel oxygen isotope data, but may also suggest regional differences in habitat openness. The palaeogeographical changes caused by the eustatic rise of global sea level at the end of the Weichselian ice age were investigated on Wrangel Island, using the strontium isotope (87Sr/86Sr) ratios in the skeletal apatite of the local mammoth fauna. The diagenetic evaluations suggest good preservation of the original Sr isotope ratios, even in the bone specimens included in the study material. To estimate present-day environmental Sr isotope values on Wrangel Island, bioapatite samples from modern reindeer and muskoxen, as well as surface waters from rivers and ice wedges, were analysed. A significant shift towards more radiogenic bioapatite Sr isotope ratios, from 0.71218 ± 0.00103 to 0.71491 ± 0.00138, marks the beginning of the Holocene. This implies a change in the migration patterns of the mammals, ultimately reflecting the inundation of the mainland connection and the isolation of the population. The bioapatite Sr isotope data support published coastline reconstructions that place the separation from the mainland at ca. 10-10.5 ka ago. The shift towards more radiogenic Sr isotope values in mid-Holocene subfossil remains after 8 ka ago reflects the rapid rise of sea level from 10 to 8 ka, which resulted in a considerable reduction of the accessible range area on early Wrangel Island.
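
The step from precipitation oxygen isotope values to palaeotemperatures rests on a δ/T relationship: the long-term mean annual temperature change is estimated by dividing the isotopic depletion by a slope expressed in ‰ per °C. The sketch below shows only the arithmetic of that conversion; the slope and depletion values are illustrative placeholders, whereas the thesis applies regionally varied slopes estimated from palaeogroundwater data and modern spatial correlations.

```python
def delta_T_from_depletion(d18O_glacial, d18O_modern, slope_permil_per_degC):
    """Temperature change implied by a shift in precipitation delta-18O.

    delta_T = (d18O_glacial - d18O_modern) / slope; a negative result
    means glacial cooling relative to the present day.
    """
    return (d18O_glacial - d18O_modern) / slope_permil_per_degC

# Placeholder values only: a 3.0 permil depletion relative to modern
# precipitation and an assumed slope of 0.6 permil per degC correspond to
# roughly 5 degC of long-term mean annual cooling.
print(delta_T_from_depletion(-12.0, -9.0, 0.6))  # -5.0
```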

Relevance:

20.00%

Publisher:

Abstract:

Wireless network access is becoming increasingly heterogeneous in terms of the types of IP-capable access technologies. This access network heterogeneity is an outcome of the incremental and evolutionary approach to building new infrastructure. The recent success of multi-radio terminals drives both the building of new infrastructure and the implicit deployment of heterogeneous access networks. Typically there is no economic reason to replace the existing infrastructure when building a new one. The gradual migration phase usually takes several years. IP-based mobility across different access networks may involve both horizontal and vertical handovers. Depending on the networking environment, the mobile terminal may be attached to the network through multiple access technologies. Consequently, the terminal may send and receive packets through multiple networks simultaneously. This dissertation addresses the introduction of the IP Mobility paradigm into existing mobile operator network infrastructure that was not originally designed for multi-access and IP Mobility. We propose a model for the future wireless networking and roaming architecture that does not require revolutionary technology changes and can be deployed without unnecessary complexity. The model proposes a clear separation of operator roles: (i) access operator, (ii) service operator, and (iii) inter-connection and roaming provider. The separation allows each type of operator to have its own development path and business models without artificial bindings to the others. We also propose minimum requirements for the new model. We present the state of the art of IP Mobility and the results of standardization efforts in IP-based wireless architectures. Finally, we present experimental results on IP-level mobility in various wireless operator deployments.
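
To make the proposed separation of roles concrete, the toy sketch below models the three operator types as independent components connected only through a roaming broker; the class names, methods and the authorization flow are invented for illustration and are not part of the dissertation's architecture.

```python
from dataclasses import dataclass

# Toy model of the proposed role separation; all names and methods are
# illustrative assumptions, not definitions from the dissertation.

@dataclass
class AccessOperator:
    """Owns the radio or fixed access networks and attaches terminals."""
    name: str
    technologies: tuple  # e.g. ("WLAN", "3G")

@dataclass
class ServiceOperator:
    """Holds the subscriber relationship and the services the user buys."""
    name: str

@dataclass
class InterconnectRoamingProvider:
    """Brokers trust and traffic between access and service operators."""
    name: str

    def authorize(self, access: AccessOperator, service: ServiceOperator) -> bool:
        # A real provider would check roaming agreements; here we simply accept.
        return True

# Because the roles are separate, any access network can serve a subscriber of
# any service operator as long as a roaming provider connects the two.
broker = InterconnectRoamingProvider("ix-1")
print(broker.authorize(AccessOperator("cafe-wlan", ("WLAN",)),
                       ServiceOperator("home-operator")))  # True
```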

Relevance:

20.00%

Publisher:

Abstract:

In visual object detection and recognition, classifiers have two interesting characteristics: accuracy and speed. Accuracy depends on the complexity of the image features and classifier decision surfaces. Speed depends on the hardware and the computational effort required to use the features and decision surfaces. When attempts to increase accuracy lead to increases in complexity and effort, it is necessary to ask how much we are willing to pay for increased accuracy. For example, if increased computational effort implies quickly diminishing returns in accuracy, then those designing inexpensive surveillance applications cannot aim for maximum accuracy at any cost. It becomes necessary to find trade-offs between accuracy and effort. We study efficient classification of images depicting real-world objects and scenes. Classification is efficient when a classifier can be controlled so that the desired trade-off between accuracy and effort (speed) is achieved and unnecessary computations are avoided on a per-input basis. A framework is proposed for understanding and modeling efficient classification of images, in which classification is modeled as a tree-like process. In designing the framework, it is important to recognize what is essential and to avoid structures that are narrow in applicability; earlier frameworks are lacking in this regard. The overall contribution is twofold. First, the framework is presented, subjected to experiments, and shown to be satisfactory. Second, certain unconventional approaches are experimented with, which allows the separation of the essential from the conventional. To determine whether the framework is satisfactory, three categories of questions are identified: trade-off optimization, classifier tree organization, and rules for delegation and confidence modeling. Questions and problems related to each category are addressed and empirical results are presented. For example, related to trade-off optimization, we address the problem of computational bottlenecks that limit the range of trade-offs, and we ask whether accuracy-versus-effort trade-offs can be controlled after training. As another example, regarding classifier tree organization, we first consider the task of organizing a tree in a problem-specific manner and then ask whether problem-specific organization is necessary.
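
The tree-like classification process described above can be pictured, in its simplest form, as a cascade in which a cheap classifier handles easy inputs and delegates uncertain ones to a more expensive classifier, with a confidence threshold controlling the accuracy-versus-effort trade-off. The sketch below is a generic two-stage illustration of that idea under assumed interfaces; it is not the framework developed in the thesis.

```python
from typing import Callable, Tuple

Prediction = Tuple[int, float]  # (class label, confidence in [0, 1])

def cascade(cheap: Callable[[object], Prediction],
            expensive: Callable[[object], Prediction],
            threshold: float) -> Callable[[object], Prediction]:
    """Return a classifier that delegates low-confidence inputs.

    Raising `threshold` sends more inputs to the expensive stage: accuracy
    tends to rise, and so does the average effort (time) per input.
    """
    def classify(x: object) -> Prediction:
        label, confidence = cheap(x)
        if confidence >= threshold:
            return label, confidence   # fast path, no extra effort
        return expensive(x)            # delegate the hard case
    return classify

# Illustrative stand-ins for real image classifiers:
cheap_stage = lambda x: (0, 0.55) if x == "hard" else (1, 0.95)
expensive_stage = lambda x: (1, 0.99)

clf = cascade(cheap_stage, expensive_stage, threshold=0.8)
print(clf("easy"))  # (1, 0.95) -- handled by the cheap stage
print(clf("hard"))  # (1, 0.99) -- delegated to the expensive stage
```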

Relevance:

20.00%

Publisher:

Abstract:

This study philosophically examines the main theories and methodological assumptions of the field known as the cognitive science of religion (CSR). The study makes a philosophically informed reconstruction of the methodological principles of the CSR, indicates problems with them, and examines possible solutions to these problems. The study focuses on several different CSR writers, namely Scott Atran, Justin Barrett, Pascal Boyer and Dan Sperber. CSR theorising is done at the intersection of the cognitive sciences, anthropology and evolutionary psychology. This multidisciplinary nature makes CSR a fertile ground for philosophical considerations coming from the philosophy of psychology, philosophy of mind and philosophy of science. The study begins by spelling out the methodological assumptions and auxiliary theories of CSR writers and situating these theories and assumptions in the nexus of existing approaches to religion. The distinctive feature of CSR is its emphasis on information processing: CSR writers claim that contemporary cognitive sciences can inform anthropological theorising about the human mind and offer tools for producing causal explanations. Further, they claim to explain the prevalence and persistence of religion by the cognitive systems that undergird religious thinking. I also examine the core theoretical contributions of the field, focusing mainly on (1) the “minimally counter-intuitive” (MCI) hypothesis and (2) the different ways in which supernatural agent representations activate our cognitive systems. Generally speaking, CSR writers argue for the naturalness of religion: religious ideas and practices are widespread and pervasive because human cognition operates in such a way that religious ideas are easy to acquire and transmit. The study raises two philosophical problems, namely the “problem of scope” and the “problem of religious relevance”. The problem of scope is created by the insistence of several critics of the CSR that CSR explanations are mostly irrelevant for explaining religion, whereas most CSR writers themselves hold that cognitive explanations can answer most of our questions about religion. I argue that the problem of scope is created by differences in explanation-begging questions: the former group is interested in explaining different things than the latter group. I propose that we should not stick too rigidly to one set of methodological assumptions, but rather acknowledge that different assumptions might help us to answer different questions about religion. Instead of adhering to some robust metaphysics, as some strongly naturalistic writers argue, we should adopt a pragmatic and explanatorily pluralist approach that would allow different kinds of methodological presuppositions in the study of religion, provided that they attempt to answer different kinds of why-questions, since religion appears to be a multi-faceted phenomenon that spans a variety of fields of the special sciences. The problem of religious relevance is created by the insistence of some writers that CSR theories show religious beliefs to be false or irrational, whereas others invoke CSR theories to defend certain religious ideas. The problem is interesting because it reveals the more general philosophical assumptions of those who make such interpretations. CSR theories can be (and have been) interpreted in terms of three different philosophical frameworks: strict naturalism, broad naturalism and theism. I argue that CSR theories can be interpreted within all three frameworks without doing violence to the theories, and that these frameworks give different kinds of results regarding the religious relevance of CSR theories.

Relevance:

20.00%

Publisher:

Abstract:

Cosmological inflation is the dominant paradigm for explaining the origin of structure in the universe. According to the inflationary scenario, there was a period of nearly exponential expansion in the very early universe, long before nucleosynthesis. Inflation is commonly considered a consequence of some scalar field or fields whose energy density starts to dominate the universe. The inflationary expansion converts the quantum fluctuations of the fields into classical perturbations on superhorizon scales, and these primordial perturbations are the seeds of the structure in the universe. Moreover, inflation also naturally explains the high degree of homogeneity and spatial flatness of the early universe. The real challenge of inflationary cosmology lies in trying to establish a connection between the fields driving inflation and theories of particle physics. In this thesis we concentrate on inflationary models at scales well below the Planck scale. The low scale allows us to seek candidates for the inflationary matter within extensions of the Standard Model, but it typically also implies fine-tuning problems. We discuss a low-scale model where inflation is driven by a flat direction of the Minimal Supersymmetric Standard Model. The relation between the potential along the flat direction and the underlying supergravity model is studied. The low inflationary scale requires an extremely flat potential, but we find that in this particular model the associated fine-tuning problems can be solved in a rather natural fashion in a class of supergravity models. For this class of models, the flatness is a consequence of the structure of the supergravity model and is insensitive to the vacuum expectation values of the fields that break supersymmetry. Another low-scale model considered in the thesis is the curvaton scenario, where the primordial perturbations originate from quantum fluctuations of a curvaton field, which is different from the fields driving inflation. The curvaton gives a negligible contribution to the total energy density during inflation, but its perturbations become significant in the post-inflationary epoch. The separation between the fields driving inflation and the fields giving rise to primordial perturbations opens up new possibilities to lower the inflationary scale without introducing fine-tuning problems. The curvaton model typically gives rise to a relatively high level of non-Gaussian features in the statistics of the primordial perturbations. We find that the level of non-Gaussian effects is heavily dependent on the form of the curvaton potential. Future observations that provide more accurate information on the non-Gaussian statistics can therefore place constraining bounds on the curvaton interactions.
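
For orientation on why the curvaton scenario is associated with sizeable non-Gaussianity, the frequently quoted result for a quadratic curvaton potential relates the non-linearity parameter to the curvaton's share of the energy density at decay; it is reproduced below for context only and is not a result of the thesis, which examines how such predictions depend on the form of the potential.

```latex
% Standard curvaton expression (quadratic potential), quoted for orientation:
% the smaller the curvaton energy fraction r at decay, the larger f_NL.
f_{\mathrm{NL}} \simeq \frac{5}{4r} - \frac{5}{3} - \frac{5r}{6},
\qquad
r \equiv \left.\frac{3\rho_\sigma}{3\rho_\sigma + 4\rho_\gamma}\right|_{\mathrm{decay}},
\qquad
r \ll 1 \;\Rightarrow\; f_{\mathrm{NL}} \approx \frac{5}{4r} \gg 1 .
```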

Relevance:

20.00%

Publisher:

Abstract:

Wood is an important material for the construction and pulping industries. In the first part of this thesis, the microfibril angle of Sitka spruce wood was studied using x-ray diffraction. Sitka spruce (Picea sitchensis [Bong.] Carr.) is native to the west coast of North America, but due to its fast growth rate, it has also been imported to Europe. So far, its nanometre-scale properties have not been systematically characterised. In this thesis the microfibril angle of Sitka spruce was shown to depend significantly on the origin of the tree in the first annual rings near the pith. Wood can be further processed to separate lignin from cellulose and hemicelluloses. Solid cellulose can act as a reducing agent for metal ions, and it is also a porous support for nanoparticles. By chemically reducing nickel or copper in the solid cellulose support it is possible to obtain small nanoparticles on the surfaces of the cellulose fibres. Cellulose-supported metal nanoparticles can potentially be used as environmentally friendly catalysts in organic chemistry reactions. In this thesis the sizes of the nickel- and copper-containing nanoparticles were studied using anomalous small-angle x-ray scattering and wide-angle x-ray scattering. The anomalous small-angle x-ray scattering experiments showed that the crystallite size of the copper oxide nanoparticles was the same as the size of the nanoparticles, so the nanoparticles were single crystals. The nickel-containing nanoparticles were amorphous but crystallised upon heating. The size of the nanoparticles was observed to be smaller when the reduction of nickel was done in an aqueous ammonium hydrate medium rather than in aqueous solution. Lignin is typically seen as a side-product of the wood industries. Lignin is the second most abundant natural polymer on Earth, and it has the potential to be a useful material for many purposes in addition to being an energy source for pulp mills. In this thesis, the morphology of several lignins, produced from wood by different separation methods, was studied using small-angle and ultra-small-angle x-ray scattering. It was shown that the fractal model previously proposed for the lignin structure does not apply to most of the extracted lignin types; the only lignin to which the fractal model could be applied was kraft lignin. In aqueous solutions the average shape of the low-molar-mass kraft lignin particles was observed to be elongated and flat. The average shape does not necessarily correspond to the shape of the individual particles because of the polydispersity of the fraction and the self-association of the particles. Lignins, and especially lignosulfonate, have many uses as dispersants, binders and emulsion stabilisers. In this thesis work the self-association of low-molar-mass lignosulfonate macromolecules was observed using small-angle x-ray scattering. By taking into account the polydispersity of the studied lignosulfonate fraction, the shape of the lignosulfonate particles was determined to be flat by fitting an oblate ellipsoidal model to the scattering intensity.
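
Crystallite sizes inferred from wide-angle x-ray scattering are commonly obtained from diffraction peak broadening via the Scherrer equation; the sketch below shows that standard estimate with purely illustrative numbers, as context for the comparison between crystallite size and particle size mentioned above. Whether the thesis used precisely this estimator is an assumption.

```python
import math

def scherrer_size_nm(wavelength_nm, fwhm_deg, two_theta_deg, K=0.9):
    """Scherrer estimate of crystallite size from WAXS peak broadening.

    size = K * lambda / (beta * cos(theta)), with beta the peak FWHM in
    radians and theta half of the scattering angle 2-theta.
    """
    beta = math.radians(fwhm_deg)
    theta = math.radians(two_theta_deg / 2.0)
    return K * wavelength_nm / (beta * math.cos(theta))

# Illustrative numbers only: Cu K-alpha radiation (0.154 nm) and a peak at
# 2-theta = 38.7 degrees with a FWHM of 1.5 degrees give a crystallite size
# of roughly 5-6 nm.
print(scherrer_size_nm(0.154, fwhm_deg=1.5, two_theta_deg=38.7))
```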

Relevance:

20.00%

Publisher:

Abstract:

According to certain arguments, computation is observer-relative either in the sense that many physical systems implement many computations (Hilary Putnam), or in the sense that almost all physical systems implement all computations (John Searle). If sound, these arguments have a potentially devastating consequence for the computational theory of mind: if arbitrary physical systems can be seen to implement arbitrary computations, the notion of computation seems to lose all explanatory power as far as brains and minds are concerned. David Chalmers and B. Jack Copeland have attempted to counter these relativist arguments by placing certain constraints on the definition of implementation. In this thesis, I examine their proposals and find both wanting in some respects. During the course of this examination, I give a formal definition of the class of combinatorial-state automata, upon which Chalmers's account of implementation is based. I show that this definition implies two theorems (one an observation due to Curtis Brown) concerning the computational power of combinatorial-state automata, theorems which speak against founding the theory of implementation upon this formalism. Toward the end of the thesis, I sketch a definition of the implementation of Turing machines in dynamical systems, and offer this as an alternative to Chalmers's and Copeland's accounts of implementation. I demonstrate that the definition does not imply Searle's claim for the universal implementation of computations. However, the definition may support claims that are weaker than Searle's, yet still troubling to the computationalist. There remains a kernel of relativity in implementation at any rate, since the interpretation of physical systems seems itself to be an observer-relative matter, to some degree at least. This observation helps clarify the role the notion of computation can play in cognitive science. Specifically, I will argue that the notion should be conceived as an instrumental rather than as a fundamental or foundational one.
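
For readers unfamiliar with the formalism, a combinatorial-state automaton differs from an ordinary finite-state automaton in that its total state is a vector of substates and its transition rule is sensitive to every component of the state and input vectors. The toy sketch below conveys only that general shape; it is an assumption-laden illustration, not the formal definition given in the thesis or in Chalmers's work.

```python
from typing import Callable, List, Tuple

State = Tuple[int, ...]   # a vector of substates rather than one monolithic state
Vector = Tuple[int, ...]

def run_csa(transition: Callable[[State, Vector], Tuple[State, Vector]],
            state: State, inputs: List[Vector]) -> List[Vector]:
    """Run a combinatorial-state automaton: each step maps the whole
    (state vector, input vector) pair to a new state vector and an output vector."""
    outputs = []
    for vec in inputs:
        state, out = transition(state, vec)
        outputs.append(out)
    return outputs

# A toy 2-component CSA: the first substate accumulates the first input
# component, the second substate toggles with the second input component.
def toy_transition(state: State, inp: Vector) -> Tuple[State, Vector]:
    a, b = state
    x, y = inp
    new_state = (a + x, (b + y) % 2)
    return new_state, new_state  # the output simply mirrors the new state vector

print(run_csa(toy_transition, (0, 0), [(1, 1), (2, 0), (3, 1)]))
# [(1, 1), (3, 1), (6, 0)]
```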

Relevance:

20.00%

Publisher:

Abstract:

The literature review elucidates the mechanism of oxidation in proteins and amino acids and gives an overview of the detection and analysis of protein oxidation products, as well as information about β-lactoglobulin and studies carried out on modifications of this protein under certain conditions. The experimental research included the fractionation of the tryptic peptides of β-lactoglobulin using preparative HPLC-MS and the monitoring of the oxidation process of these peptides via reversed-phase HPLC-UV. The peptides to be oxidized were selected on the basis of their content of oxidation-susceptible amino acids and were fractionated according to their m/z values. These peptides were IPAVFK (m/z 674), ALPMHIR (m/z 838), LIVTQTMK (m/z 934) and VLVLDTDYK (m/z 1066). Even though it was not possible to isolate the target peptides completely, owing to the co-elution of various fractions, the percentages of target peptides in the samples were sufficient for carrying out the oxidation procedure. The IPAVFK and VLVLDTDYK fractions were found to yield the oxidation products reviewed in the literature; however, unoxidized peptides were still present in high amounts after 21 days of oxidation. The UV data at 260 and 280 nm made it possible to monitor both the main peptides and the oxidation products, owing to the absorbance of the aromatic side-chains these peptides possess. The ALPMHIR and LIVTQTMK fractions were rapidly consumed by oxidation, and oxidation products of these peptides were observed even on day 0. The high depletion rates of these peptides were attributed to the presence of His (H) and the sulfur-containing side-chain of Met (M). In conclusion, the selected peptides have the potential to be used as marker peptides in β-lactoglobulin oxidation.
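
The quoted m/z values correspond to the singly protonated peptides and can be checked from standard monoisotopic residue masses; the short sketch below performs that calculation for IPAVFK as an illustration (the residue masses are standard values, not data from the thesis).

```python
# Monoisotopic residue masses (Da) for the amino acids occurring in the
# peptides listed above; standard values, not taken from the thesis.
RESIDUE = {
    "A": 71.03711, "D": 115.02694, "F": 147.06841, "H": 137.05891,
    "I": 113.08406, "K": 128.09496, "L": 113.08406, "M": 131.04049,
    "P": 97.05276, "Q": 128.05858, "R": 156.10111, "T": 101.04768,
    "V": 99.06841, "Y": 163.06333,
}
WATER, PROTON = 18.01056, 1.00728

def mz_singly_protonated(sequence: str) -> float:
    """m/z of the [M+H]+ ion: sum of residue masses plus water plus one proton."""
    return sum(RESIDUE[aa] for aa in sequence) + WATER + PROTON

print(round(mz_singly_protonated("IPAVFK"), 2))  # ~674.42, i.e. nominal m/z 674
```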

Relevance:

20.00%

Publisher:

Abstract:

Layering is a widely used method for structuring data in CAD models. During the last few years, national standardisation organisations, professional associations, user groups for particular CAD systems, individual companies, etc. have issued numerous standards and guidelines for the naming and structuring of layers in building design. In order to increase the integration of CAD data in the industry as a whole, ISO recently decided to define an international standard for layer usage. The resulting standard proposal, ISO 13567, is a rather complex framework standard which strives to be more of a union than the least common denominator of the capabilities of existing guidelines. A number of principles have been followed in the design of the proposal. The first is the separation of the conceptual organisation of information (semantics) from the way this information is coded (syntax). The second is orthogonality: many ways of classifying information are independent of each other and can be applied in combination. The third overriding principle is the reuse of existing national or international standards whenever appropriate. The fourth principle allows users to apply well-defined subsets of the overall superset of possible layer names. This article describes the semantic organisation of the standard proposal as well as its default syntax. Important information categories deal with the party responsible for the information, the type of building element shown, and whether a layer contains the direct graphical description of a building part or additional information needed in an output drawing, etc. Non-mandatory information categories facilitate the structuring of information in rebuilding projects, the use of layers for spatial grouping in large multi-storey projects, and the storing of multiple representations intended for different drawing scales in the same model. Pilot testing of ISO 13567 is currently being carried out in a number of countries which have been involved in the definition of the standard. The article describes two implementations, carried out independently in Sweden and Finland. The article concludes with a discussion of the benefits and possible drawbacks of the standard. Incremental development within the industry (where “best practice” can become “common practice” via a standard such as ISO 13567) is contrasted with the more idealistic scenario of building product models. The relationship between CAD layering, document management, product modelling and building element classification is also discussed.
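
To illustrate how a framework standard of this kind structures a layer name into fixed-position fields (responsible party, building element, presentation, followed by optional fields), the sketch below parses a hypothetical layer name. The field widths and the example name are assumptions made for illustration and should not be read as the normative ISO 13567 field definitions.

```python
from typing import Dict

# Assumed widths for the mandatory fields (illustrative only; consult
# ISO 13567 itself for the normative field definitions and code lists).
MANDATORY_FIELDS = [("agent", 2), ("element", 6), ("presentation", 2)]

def parse_layer_name(name: str) -> Dict[str, str]:
    """Split a fixed-position layer name into named fields.

    Anything beyond the mandatory fields is returned unparsed as 'optional'.
    """
    fields, pos = {}, 0
    for field, width in MANDATORY_FIELDS:
        fields[field] = name[pos:pos + width]
        pos += width
    fields["optional"] = name[pos:]
    return fields

# Hypothetical example: an architect code, a building-element code, a
# presentation code for drawing graphics, plus optional trailing codes.
print(parse_layer_name("A-22----D-N2"))
# {'agent': 'A-', 'element': '22----', 'presentation': 'D-', 'optional': 'N2'}
```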