16 results for cardiac signals, EEG signals, analysis, higher order spectra
in Helda - Digital Repository of University of Helsinki
Abstract:
This thesis is a study of a relatively new logic called dependence logic and of its closure under classical negation, team logic. Dependence logic is investigated from several aspects. Rules are presented for quantifier swapping in dependence logic and team logic; such rules are among the basic tools one must be familiar with in order to gain the intuition required for using the logic in practice. The thesis compares the Ehrenfeucht-Fraïssé (EF) games of first-order logic and dependence logic and defines a third EF game that characterises a mixed case where first-order formulas are measured in the formula rank of dependence logic. The thesis contains detailed proofs of several translations between dependence logic, team logic, second-order logic and its existential fragment. Translations are useful for showing relationships between the expressive powers of logics; moreover, by inspecting the form of the translated formulas, one can see how an aspect of one logic can be expressed in the other. The thesis makes preliminary investigations into the proof theory of dependence logic, focusing on finding a complete proof system for a modest yet nontrivial fragment of dependence logic. A key problem in adapting a known proof system of classical propositional logic into a proof system for the fragment is identified and addressed, namely that the rule of contraction is needed but is unsound in its unrestricted form. A proof system for the fragment is suggested and its completeness conjectured. Finally, the thesis investigates the very foundation of dependence logic. An alternative semantics, called 1-semantics, is suggested for the syntax of dependence logic. There are several key differences between 1-semantics and the other semantics of dependence logic. 1-semantics is derived from first-order semantics by a natural type shift, and therefore it reflects an established semantics in a coherent manner. Negation in 1-semantics is a semantic operation and satisfies the law of excluded middle. A translation is provided from unrestricted formulas of existential second-order logic into 1-semantics. Game-theoretic semantics is also considered in the light of 1-semantics.
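The abstract does not spell out the team semantics on which the quantifier-swapping rules and the translations operate. For orientation only, the usual team-semantic reading of the dependence atom (the standard definition in dependence logic, not a formulation taken from this thesis) is sketched below: a team X is a set of assignments over a fixed first-order structure M.

```latex
% Standard team-semantic reading of the dependence atom (quoted for
% orientation; not stated in the abstract itself).
\[
  \mathcal{M} \models_X \mathord{=}(x_1,\dots,x_n,y)
  \quad\Longleftrightarrow\quad
  \forall s, s' \in X:\;
  \Bigl( \bigwedge_{i=1}^{n} s(x_i) = s'(x_i) \Bigr)
  \;\rightarrow\; s(y) = s'(y).
\]
```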
Abstract:
Paramagnetic, or open-shell, systems are often encountered in the context of metalloproteins, and they are also an essential part of molecular magnets. Nuclear magnetic resonance (NMR) spectroscopy is a powerful tool for chemical structure elucidation, but for paramagnetic molecules it is substantially more complicated than in the diamagnetic case. Before the present work, the theory of NMR of paramagnetic molecules was limited to spin-1/2 systems and did not include relativistic corrections to the hyperfine effects. Nor was it systematically expandable. The theory was first expanded by including hyperfine contributions up to the fourth power in the fine-structure constant α. It was then reformulated and its scope widened to allow any spin state in any spatial symmetry. This involved including zero-field splitting effects. At both stages the theory was implemented in a separate analysis program. The different levels of theory were tested by demonstrative density functional calculations on molecules selected to showcase the relative strength of the new NMR shielding terms. The theory was also tested in a joint experimental and computational effort to confirm the assignment of 11B signals. The new terms were found to be significant and comparable with the terms in the earlier levels of theory. The leading-order magnetic-field dependence of shielding in paramagnetic systems was formulated. The theory is now systematically expandable, allowing for higher-order field dependence and relativistic contributions. The prevailing experimental view of the pseudocontact shift was found to be significantly incomplete, as it only includes a specific geometric dependence, which is not present in most of the new terms introduced here. The computational uncertainty in density functional calculations of the Fermi contact hyperfine constant and the zero-field splitting tensor sets a limit on the quantitative prediction of paramagnetic shielding for now.
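As a point of reference for the higher-order terms developed in the thesis, the conventional lowest-order (nonrelativistic, spin-only) expression for the isotropic Fermi contact shift is quoted below. The notation is assumed here for illustration and is not taken from the thesis; in practice the free-electron g-factor is replaced by the full g-tensor.

```latex
% Conventional leading-order Fermi contact (Curie-type) shift, quoted only
% as a reference point for the higher-order theory described in the abstract.
% A_iso: isotropic hyperfine coupling (energy units), gamma_N: nuclear
% gyromagnetic ratio, S: effective electron spin, T: temperature.
\[
  \delta^{\mathrm{con}}
  \;\approx\;
  \frac{A_{\mathrm{iso}}\, g_{e}\, \mu_{B}\, S(S+1)}{3\, \gamma_{N}\, \hbar\, k_{B} T}.
\]
```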
Abstract:
The purpose of this study is to describe the development of the application of mass spectrometry to the structural analysis of non-coding ribonucleic acids during the past decade. Mass spectrometric methods are compared with traditional gel electrophoretic methods, the performance characteristics of mass spectrometric analyses are studied, and the future trends of mass spectrometry of ribonucleic acids are discussed. Non-coding ribonucleic acids are short polymeric biomolecules which are not translated to proteins but which may affect gene expression in all organisms. Regulatory ribonucleic acids act through transient interactions with key molecules in signal transduction pathways. The interactions are mediated through specific secondary and tertiary structures. Posttranscriptional modifications in the structures of the molecules may introduce new properties to the organism, such as adaptation to environmental changes or development of resistance to antibiotics. In the scope of this study, the structural studies include i) determination of the sequence of nucleobases in the polymer chain, ii) characterisation and localisation of posttranscriptional modifications in the nucleobases and in the backbone structure, iii) identification of ribonucleic acid-binding molecules and iv) probing of higher order structures in the ribonucleic acid molecule. Bacteria, archaea, viruses and HeLa cancer cells have been used as target organisms. Synthesised ribonucleic acids consisting of structural regions of interest have been used frequently. Electrospray ionisation (ESI) and matrix-assisted laser desorption ionisation (MALDI) have been used for ionisation of ribonucleic acid analytes. Ammonium acetate and 2-propanol are common solvents for ESI. Trihydroxyacetophenone is the optimal MALDI matrix for ionisation of ribonucleic acids and peptides. Ammonium salts are used as additives in ESI buffers and MALDI matrices to remove cation adducts. Reverse-phase high-performance liquid chromatography has been used for desalting and fractionation of analytes either off-line or on-line, coupled with the ESI source. Triethylamine and triethylammonium bicarbonate are used almost exclusively as ion-pair reagents. A Fourier transform ion cyclotron resonance analyser using ESI coupled with liquid chromatography is the platform of choice for all forms of structural analyses. A time-of-flight (TOF) analyser using MALDI may offer a sensitive, easy-to-use and economical solution for simple sequencing of longer oligonucleotides and for analyses of analyte mixtures without prior fractionation. Special analysis software is used for computer-aided interpretation of mass spectra. With mass spectrometry, sequences of 20-30 nucleotides in length may be determined unambiguously. Sequencing may be applied to quality control of short synthetic oligomers for analytical purposes. Sequencing in conjunction with other structural studies enables accurate localisation and characterisation of posttranscriptional modifications and identification of the nucleobases and amino acids at the sites of interaction. High-throughput screening methods for RNA-binding ligands have been developed. Probing of the higher order structures has provided supportive data for computer-generated three-dimensional models of viral pseudoknots. In conclusion, mass spectrometric methods are well suited for structural analyses of small species of ribonucleic acids, such as short non-coding ribonucleic acids in the molecular size region of 20-30 nucleotides.
Structural information not attainable with other methods of analysis, such as nuclear magnetic resonance and X-ray crystallography, may be obtained with the use of mass spectrometry. Ligand screening may be used in the search for possible new therapeutic agents. Demanding assay design and challenging interpretation of the data require multidisciplinary knowledge. The application of mass spectrometry to structural studies of ribonucleic acids is probably most efficiently conducted in specialist groups consisting of researchers from various fields of science.
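Much of the routine interpretation described above reduces to converting multiply charged, deprotonated ESI peaks of an oligonucleotide into a neutral mass. A minimal sketch of that arithmetic is given below; the function name and the example m/z values are hypothetical illustrations, not data from the studies reviewed.

```python
PROTON_MASS = 1.007276  # Da

def neutral_mass_negative_esi(mz: float, charge: int) -> float:
    """Neutral mass of an oligonucleotide from a negative-mode ESI peak.

    In negative mode the ion is [M - z*H]^(z-), so the neutral mass is
    recovered by adding back the z protons that were removed.
    """
    return mz * charge + charge * PROTON_MASS

if __name__ == "__main__":
    # Hypothetical charge-state series of the same short RNA oligomer.
    peaks = [(1531.2, 4), (1224.7, 5), (1020.4, 6)]
    for mz, z in peaks:
        mass = neutral_mass_negative_esi(mz, z)
        print(f"m/z {mz:.1f} ({z}-): neutral mass ~ {mass:.1f} Da")
```

Each charge state should reconstruct approximately the same neutral mass, which is the usual internal consistency check in ESI spectra of nucleic acids.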
Abstract:
The adequacy of anesthesia has been studied since the introduction of balanced general anesthesia. Commercial monitors based on electroencephalographic (EEG) signal analysis have been available for monitoring the hypnotic component of anesthesia since the beginning of the 1990s. Monitors measuring the depth of anesthesia assess the cortical function of the brain and have gained acceptance during surgical anesthesia with most of the anesthetic agents used. However, due to frequent artifacts, they are considered unsuitable for monitoring consciousness in intensive care patients. The assessment of analgesia is one of the cornerstones of general anesthesia. Prolonged surgical stress may lead to increased morbidity and delayed postoperative recovery. However, no validated monitoring method is currently available for evaluating analgesia during general anesthesia. Awareness during anesthesia is caused by an inadequate level of hypnosis. This rare but severe complication of general anesthesia may lead to marked emotional stress and possibly posttraumatic stress disorder. In the present series of studies, the incidence of awareness and recall during outpatient anesthesia was evaluated and compared with that in inpatient anesthesia. A total of 1500 outpatients and 2343 inpatients underwent a structured interview. Clear intraoperative recollections were rare, the incidence being 0.07% in outpatients and 0.13% in inpatients. No significant differences emerged between outpatients and inpatients. However, significantly smaller doses of sevoflurane were administered to outpatients with awareness than to those without recollections (p<0.05). EEG artifacts in 16 brain-dead organ donors were evaluated during organ harvest surgery in a prospective, open, nonselective study. The source of the frontotemporal biosignals in brain-dead subjects was studied, and the resistance of the bispectral index (BIS) and Entropy to the signal artifacts was compared. The hypothesis was that in brain-dead subjects most of the biosignals recorded from the forehead would consist of artifacts. The original EEG was recorded, and State Entropy (SE), Response Entropy (RE), and BIS were calculated and monitored during solid organ harvest. SE differed from zero (inactive EEG) in 28%, RE in 29%, and BIS in 68% of the total recording time (p<0.0001 for all). The median values during the operation were SE 0.0, RE 0.0, and BIS 3.0. In four of the 16 organ donors, the EEG was not inactive, and unphysiologically distributed, nonreactive rhythmic theta activity was present in the original EEG signal. After the results from subjects with persistent residual EEG activity were excluded, SE, RE, and BIS differed from zero in 17%, 18%, and 62% of the recorded time, respectively (p<0.0001 for all). Due to various artifacts, the highest readings in all indices were recorded without neuromuscular blockade. The main sources of artifacts were electrocauterization, electromyography (EMG), 50-Hz artifact, handling of the donor, ballistocardiography, and electrocardiography. In a prospective, randomized study of 26 patients, the ability of the Surgical Stress Index (SSI) to differentiate between patients with two clinically different analgesic levels during shoulder surgery was evaluated. SSI values were lower in patients with an interscalene brachial plexus block than in patients without an additional plexus block. In all patients, anesthesia was maintained with desflurane, the concentration of which was targeted to maintain SE at 50.
Increased blood pressure or heart rate (HR), movement, and coughing were considered signs of intraoperative nociception and treated with alfentanil. Photoplethysmographic waveforms were collected from the arm contralateral to the operated side, and SSI was calculated offline. Two minutes after skin incision, SSI was not increased in the brachial plexus block group and was lower (38 ± 13) than in the control group (58 ± 13, p<0.005). Among the controls, one minute prior to alfentanil administration, the SSI value was higher than during periods of adequate antinociception, 59 ± 11 vs. 39 ± 12 (p<0.01). The total cumulative need for alfentanil was higher in controls (2.7 ± 1.2 mg) than in the brachial plexus block group (1.6 ± 0.5 mg, p=0.008). Tetanic stimulation to the ulnar region of the hand increased SSI significantly only among patients with a brachial plexus block not covering the site of stimulation. The prognostic value of EEG-derived indices after cardiac arrest was evaluated and compared with that of transcranial Doppler ultrasonography (TCD), serum neuron-specific enolase (NSE) and S-100B. Thirty patients resuscitated from out-of-hospital arrest and treated with induced mild hypothermia for 24 h were included. The original EEG signal was recorded, and the burst suppression ratio (BSR), RE, SE, and wavelet subband entropy (WSE) were calculated. Neurological outcome during the six-month period after arrest was assessed with the Glasgow-Pittsburgh Cerebral Performance Categories (CPC). Twenty patients had a CPC of 1-2, one patient had a CPC of 3, and nine patients died (CPC 5). BSR, RE, and SE differed between the good (CPC 1-2) and poor (CPC 3-5) outcome groups (p=0.011, p=0.011, p=0.008, respectively) during the first 24 h after arrest. WSE was borderline higher in the good outcome group between 24 and 48 h after arrest (p=0.050). All patients with status epilepticus died, and their WSE values were lower (p=0.022). S-100B was lower in the good outcome group upon arrival at the intensive care unit (p=0.010). After hypothermia treatment, NSE and S-100B values were lower (p=0.002 for both) in the good outcome group. The pulsatility index was also lower in the good outcome group (p=0.004). In conclusion, the incidence of awareness in outpatient anesthesia did not differ from that in inpatient anesthesia. Outpatients are not at increased risk for intraoperative awareness relative to inpatients undergoing general anesthesia. SE, RE, and BIS showed non-zero values that normally indicate cortical neuronal function, but in these subjects these were mostly due to artifacts after the clinical diagnosis of brain death. Entropy was more resistant to artifacts than BIS. During general anesthesia and surgery, SSI values were lower in patients with an interscalene brachial plexus block covering the sites of nociceptive stimuli. In detecting nociceptive stimuli, SSI performed better than HR, blood pressure, or RE. BSR, RE, and SE differed between the good and poor neurological outcome groups during the first 24 h after cardiac arrest, and they may be an aid in differentiating patients with good neurological outcomes from those with poor outcomes after out-of-hospital cardiac arrest.
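State Entropy and Response Entropy are proprietary monitor indices, but both build on the spectral entropy of the EEG power spectrum. The sketch below shows only that generic underlying quantity under stated assumptions (frequency band, scaling); it is not the commercial algorithm, which additionally handles windowing, separate EEG/EMG bands and artifact rejection.

```python
import numpy as np


def spectral_entropy(signal: np.ndarray, fs: float,
                     f_lo: float = 0.8, f_hi: float = 32.0) -> float:
    """Normalised spectral entropy of an EEG segment, scaled to [0, 1].

    Generic quantity underlying entropy-based depth-of-anaesthesia indices;
    the band limits and scaling here are assumptions, not the SE/RE spec.
    """
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    power = np.abs(np.fft.rfft(signal)) ** 2
    band = (freqs >= f_lo) & (freqs <= f_hi)
    p = power[band] / power[band].sum()       # normalised spectral distribution
    h = -np.sum(p * np.log(p + 1e-12))        # Shannon entropy of the spectrum
    return h / np.log(band.sum())             # white noise -> approximately 1


if __name__ == "__main__":
    fs = 128.0
    t = np.arange(0, 4, 1 / fs)
    # Hypothetical signals: narrow-band "deep" EEG vs. broadband "awake" EEG.
    deep = np.sin(2 * np.pi * 2.0 * t)
    awake = np.random.default_rng(0).standard_normal(t.size)
    print(f"deep-like:  {spectral_entropy(deep, fs):.2f}")
    print(f"awake-like: {spectral_entropy(awake, fs):.2f}")
```

A nearly inactive or highly regular signal yields values near zero, while broadband activity (or broadband artifact such as EMG) pushes the value toward one, which is why artifact resistance matters in the brain-dead donor study described above.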
Abstract:
The main purpose of the research was to characterise chemistry matriculation examination questions as a summative assessment tool and to show how the questions have evolved over the years. Summative assessment and its various test item classifications, the Finnish goal-oriented curriculum model, and Bloom’s Revised Taxonomy of Cognitive Objectives formed the theoretical framework of the research. The research data consisted of 257 chemistry questions from 28 matriculation examinations between 1996 and 2009. The analysed test questions were formulated according to the national upper secondary school chemistry curricula of 1994 and 2003. A qualitative approach and a theory-driven content analysis method were employed in the research. Peer review was used to ensure the reliability of the results. The research was guided by the following questions: (a) What kinds of test item formats are used in chemistry matriculation examinations? (b) How are the fundamentals of chemistry included in the chemistry matriculation examination questions? (c) What kinds of cognitive knowledge and skills do the chemistry matriculation examination questions require? The research indicates that summative assessment was used diversely in chemistry matriculation examinations. The tests included various test item formats and combinations of them. The majority of the test questions were constructed-response items that were verbal, quantitative, or experimental questions, symbol questions, or combinations of the aforementioned. The studied chemistry matriculation examinations seldom included selected-response items, which can be multiple-choice, alternate-choice, or matching items. The relative emphasis of the test item formats differed slightly depending on whether the test was part of an extensive general studies battery of tests in sciences and humanities or a subject-specific test. The classification framework developed in the research can be applied in chemistry and science education, and also in educational research. Chemistry matriculation examinations are based on the goal-oriented curriculum model and cover relatively well the fundamentals of chemistry included in the national curriculum. Most of the test questions related to the symbolism of chemical equations, inorganic and organic reaction types and applications, bonding and spatial structure in organic compounds, and stoichiometric problems. Only a few questions related to electrolysis, polymers, or buffer solutions. None of the test questions related to composites. There were no significant differences in emphasis between the tests formulated according to the national curriculum of 1994 and those based on the curriculum of 2003. Chemistry matriculation examinations are cognitively demanding. The research shows that the majority of the test questions require higher-order cognitive skills. Most of the questions required analysis of procedural knowledge. Questions that only required remembering or processing metacognitive knowledge were not present in the research data. The required knowledge and skill level varied slightly between the test questions in the extensive general studies battery of tests in sciences and humanities and those in the subject-specific tests administered since 2006. The proportion of Finnish chemistry matriculation examination questions requiring higher-order cognitive knowledge and skills is very large compared with what is reported in the research literature.
Abstract:
Visual information processing in the brain proceeds in both serial and parallel fashion throughout various functionally distinct, hierarchically organised cortical areas. Feedforward signals from the retina and from hierarchically lower cortical levels are the major activators of visual neurons, but top-down and feedback signals from higher-level cortical areas have a modulating effect on neural processing. My work concentrates on visual encoding in hierarchically low-level cortical visual areas of the human brain and examines neural processing especially in the cortical representation of the visual field periphery. I use magnetoencephalography and functional magnetic resonance imaging to measure neuromagnetic and hemodynamic responses from healthy volunteers during visual stimulation and oculomotor and cognitive tasks. My thesis comprises six publications. The visual cortex poses a great challenge for the modeling of neuromagnetic sources. My work shows that a priori information about source locations is needed for modeling neuromagnetic sources in the visual cortex. In addition, my work examines other potential confounding factors in vision studies, such as light scatter inside the eye, which may result in erroneous responses in cortex outside the representation of the stimulated region, as well as eye movements and attention. I mapped cortical representations of the peripheral visual field and identified a putative human homologue of functional area V6 of the macaque in the posterior bank of the parieto-occipital sulcus. My work shows that human V6 activates during eye movements and that it responds to visual motion at short latencies. These findings suggest that human V6, like its monkey homologue, is related to fast processing of visual stimuli and to visually guided movements. I demonstrate that peripheral vision is functionally related to eye movements and connected to a rapid stream of functional areas that process visual motion. In addition, my work shows two different forms of top-down modulation of neural processing at the hierarchically lowest cortical levels: one that is related to dorsal stream activation and may reflect motor processing or resetting signals that prepare the visual cortex for a change in the environment, and another, a local signal enhancement at the attended region, which reflects a local feedback signal and may perceptually increase stimulus saliency.
Abstract:
This three-phase design research describes the modelling processes for DC-circuit phenomena. The first phase presents an analysis of the development of the historical models of the DC circuit in the context of the construction of Volta's pile at the turn of the 18th and 19th centuries. The second phase involves the design of a teaching experiment for comprehensive school third graders. Among other considerations, the design work utilises the results of the first phase and the research literature on pupils' mental models of DC-circuit phenomena. The third phase of the research was concerned with the realisation of the planned teaching experiment. The aim of this phase was to study the development of the external representations of DC-circuit phenomena in a small group of third graders. The aim of the study has been to search for new ways to guide pupils to learn DC-circuit phenomena while emphasising understanding at the qualitative level. Thus electricity, which has been perceived as a difficult and abstract subject, could be learnt more comprehensively. Research on younger pupils' learning of electricity concepts has attracted little interest at the international level, although DC-circuit phenomena are also taught in the lower classes of comprehensive schools. The results of this study are important, because the teaching of natural sciences in the lower classes of comprehensive schools has been increasing, and attempts are being made to develop this trend in Finland. In the theoretical part of the research an Experimental-centred representation approach, which emphasises the role of experimentalism in the development of pupils' representations, is created. According to this approach, learning at the qualitative level consists of empirical operations such as experimenting, observation, perception, and prequantification of natural phenomena, and of modelling operations such as explaining and reasoning. Besides planning teaching, the new approach can be used as an analysis tool for describing both historical modelling and the development of pupils' representations. In the first phase of the study, the research question was: How did the historical models of DC-circuit phenomena develop in Volta's time? The analysis uncovered three qualitative historical models associated with the historical concept formation process. The models include conceptions of the electric circuit as the scene of the DC-circuit phenomena, the comparative electric-current phenomenon as the cause of different observable effect phenomena, and the strength of the battery as the cause of the electric-current phenomenon. These models describe the concept formation process and its phases in Volta's time. The models are portrayed in the analysis using fragments of the models, in which observation-based fragments and theoretical fragments are distinguished from each other. The results emphasise the significance of qualitative concept formation and the role of language in the historical modelling of DC-circuit phenomena. For this reason these viewpoints are stressed in planning the teaching experiment in the second phase of the research. In addition, the design process utilised the experimentation behind the historical models of DC-circuit phenomena. In the third phase of the study the research question is as follows: How will the small group's external representations of DC-circuit phenomena develop during the teaching experiment?
The main question is divided into the following two sub-questions: What kind of talk exists in the small group's learning? What kinds of external representations of DC-circuit phenomena appear in the small group's discourse during the teaching experiment? The analysis revealed that the teaching experiment succeeded in its aim to activate talk in the small group. The designed connection cards proved especially successful in activating talk. The connection cards are cards that represent the components of the electric circuit. In the teaching experiment the pupils constructed different connections with the connection cards and discussed what kinds of DC-circuit phenomena would take place in the corresponding real connections. The talk of the small group was analysed by comparing two situations: first, when the small group discussed connections made with the connection cards, and second, the same connections made with real components. According to the results, the talk of the small group included more higher-order thinking when using the connection cards than with similar real components. In order to answer the second sub-question, concerning the small group's external representations that appeared in the talk during the teaching experiment, student talk was visualised by fragment maps which incorporate the electric circuit, the electric current and the source voltage. The fragment maps represent the gradual development of the external representations of DC-circuit phenomena in the small group during the teaching experiment. The results of the study challenge the results of previous research on the abstractness and difficulty of electricity concepts. According to this research, the external representations of DC-circuit phenomena clearly developed in the small group of third graders. Furthermore, the fragment maps reveal that although the theoretical explanations of DC-circuit phenomena, of the kind obtained as results in typical mental model studies, remain undeveloped, learning at the qualitative level of understanding does take place.
Abstract:
The aim of this dissertation was to explore how different types of prior knowledge influence student achievement and how different assessment methods influence the observed effect of prior knowledge. The project started by creating a model of prior knowledge, which was tested in various science disciplines. Study I explored the contribution of different components of prior knowledge to student achievement in two different mathematics courses. The results showed that the procedural knowledge components, which require higher-order cognitive skills, predicted the final grades best and were also highly related to previous study success. The same pattern regarding the influence of prior knowledge was also seen in Study III, which was a longitudinal study of the accumulation of prior knowledge in the context of pharmacy. The study analysed how prior knowledge from previous courses was related to student achievement in the target course. The results implied that students who possessed higher-level prior knowledge, that is, procedural knowledge, from previous courses also obtained higher grades in the more advanced target course. Study IV explored the impact of different types of prior knowledge on students’ readiness to drop out of the course, on the pace of completing the course and on the final grade. The study was conducted in the context of chemistry. The results revealed again that students who performed well in the procedural prior-knowledge tasks were also likely to complete the course in the pre-scheduled time and to get higher final grades. On the other hand, students whose performance was weak in the procedural prior-knowledge tasks were more likely to drop out or to take a longer time to complete the course. Study II explored the issue of prior knowledge from another perspective. It aimed to analyse the interrelations between academic self-beliefs, prior knowledge and student achievement in the context of mathematics. The results revealed that prior knowledge was more predictive of student achievement than the other variables included in the study. Self-beliefs were also strongly related to student achievement, but the predictive power of prior knowledge overrode the influence of self-beliefs when both were included in the same model. There was also a strong correlation between academic self-beliefs and prior-knowledge performance. The results of all four studies were consistent with each other, indicating that the model of prior knowledge may be used as a potential tool for prior knowledge assessment. It is useful to make a distinction between different types of prior knowledge in assessment, since the type of prior knowledge students possess appears to make a difference. The results implied that there is indeed variation in students’ prior knowledge and academic self-beliefs, and that this variation influences student achievement. This should be taken into account in instruction.
Abstract:
In the future the number of disabled drivers requiring a special evaluation of their driving ability will increase due to the ageing population as well as the progress of adaptive technology. This places pressure on the development of the driving evaluation system. Despite quite intensive research, there is still no consensus concerning what the factual situation in a driver evaluation is (methodology), which measures should be included in an evaluation (methods), and how an evaluation should be carried out (practice). In order to find answers to these questions we carried out empirical studies and simultaneously elaborated a conceptual model of driving and of the driving evaluation. The findings of the empirical studies can be condensed into the following points: 1) Driving ability as defined by the on-road driving test is associated with different laboratory measures depending on the study group. Faults in the laboratory tests predicted faults in the on-road driving test in the novice group, whereas slowness in the laboratory predicted driving faults in the experienced drivers group. 2) The Parkinson study clearly showed that even an experienced clinician cannot reliably evaluate a disabled person’s driving ability without collaboration with other specialists. 3) The main finding of the stroke study was that the use of a multidisciplinary team as a source of information harmonises the specialists’ evaluations. 4) The patient studies demonstrated that disabled persons themselves, as well as their spouses, are as a rule not reliable evaluators. 5) From the safety point of view, perceptible operations with the control devices are not crucial; the correct mental actions which the driver carries out with the help of the control devices are of greatest importance. 6) Personality factors, including higher-order needs and motives, attitudes and a degree of self-awareness, particularly a sense of illness, are decisive when evaluating a disabled person’s driving ability. Personality is also the main source of resources for compensating for lower-order physical deficiencies and restrictions. From the work with the conceptual model we drew the following methodological conclusions: First, the driver has to be considered as a holistic subject of the activity, as a multilevel, hierarchically organised system of an organism, a temperament, an individuality, and a personality, where the personality is the leading subsystem from the standpoint of safety. Second, driving, as a human form of sociopractical activity, is also a hierarchically organised dynamic system. Third, an evaluation of driving ability is a question of matching these two hierarchically organised structures: the subject of an activity and the activity proper. Fourth, an evaluation has to be person-centred, not disease-, function- or method-centred. On the basis of our study, a multidisciplinary team (practitioner, driving school teacher, psychologist, occupational therapist) is recommended for use in demanding driver evaluations. What is primary in driver evaluations is a coherent conceptual model, while the concrete evaluation methods may vary. However, the on-road test must always be performed if possible.
Abstract:
The dissertation consists of four essays and a comprehensive introduction that discusses the topics, methods, and most prominent theories of philosophical moral psychology. I distinguish three main questions: What are the essential features of moral thinking? What are the psychological conditions of moral responsibility? And finally, what are the consequences of empirical facts about human nature for normative ethics? Each of the three last articles focuses on one of these issues. The first essay and part of the introduction are dedicated to methodological questions, in particular the relationship between empirical (social) psychology and philosophy. I reject recent attempts to understand the nature of morality on the basis of empirical research. One characteristic feature of moral thinking is its practical clout: if we regard an action as morally wrong, we either refrain from doing it even against our desires and interests, or else feel shame or guilt. Moral views seem to have a conceptual connection to motivation and emotions – roughly speaking, we cannot conceive of someone genuinely disapproving of an action but nonetheless doing it without any inner motivational conflict or regret. This conceptual thesis in moral psychology is called (judgment) internalism. It implies, among other things, that psychopaths cannot make moral judgments to the extent that they are incapable of the corresponding motivation and emotion, even if they might largely say the words we would expect. Is internalism true? Recently, there has been an explosion of interest in so-called experimental philosophy, a methodological view according to which claims about conceptual truths that appeal to our intuitions should be tested by way of surveys presented to ordinary language users. One experimental result is that the majority of people are willing to grant that psychopaths make moral judgments, which challenges internalism. In the first article, ‘The Rise and Fall of Experimental Philosophy’, I argue that these results pose no real threat to internalism, since experimental philosophy is based on too simple a conception of the relationship between language use and concepts. Only the reactions of competent users in pragmatically neutral and otherwise conducive circumstances yield evidence about conceptual truths, and such robust intuitions remain inaccessible to surveys for reasons of principle. The epistemology of folk concepts must still be based on Socratic dialogue and critical reflection, whose character and authority I discuss at the end of the paper. The internal connection between moral judgment and motivation led many metaethicists in the past century to believe, along Humean lines, that the judgment itself consists in a pro-attitude rather than a belief. This expressivist view, as it is called these days, has far-reaching consequences in metaethics. In the second essay I argue that perhaps the most sophisticated form of contemporary expressivism, Allan Gibbard’s norm-expressivism, according to which moral judgments are decisions or contingency plans, is implausible from the perspective of the theory of action. In certain circumstances it is possible to think that something is morally required of one without deciding to do it. Morality is not a matter of the will. Instead, I sketch, on the basis of Robert Brandom’s inferentialist semantics, a weak form of judgment internalism, according to which the content of a moral judgment is determined by a commitment to a particular kind of practical reasoning.
The last two essays in the dissertation emphasize the role of mutual recognition in the development and maintenance of responsible and autonomous moral agency. I defend a compatibilist view of autonomy, according to which agents who are unable to recognize right and wrong or to act accordingly are not responsible for their actions – it is not fair to praise or blame them, since they lack the relevant capacity to do otherwise. Conversely, autonomy demands an ability to recognize reasons and act on them. But as a long tradition in German moral philosophy, whose best-known contemporary representative is Axel Honneth, has it, both being aware of reasons and acting on them also require the right sort of higher-order attitudes toward the self. Without self-respect and self-confidence we remain at the mercy of external pressures, even if we have the necessary normative competence. These attitudes toward the self, in turn, are formed through mutual recognition – we value ourselves when those whom we value value us. Thus, standing in the right sort of relations of recognition is indirectly necessary for autonomy and moral responsibility. Recognition and valuing are concretely manifested in actions and institutions whose practices make possible participation on an equal footing. Seeing this opens the way for a kind of normative social criticism that is grounded in the value of freedom and autonomy but is not limited to defending negative rights. It thus offers a new way to bridge the gap between liberalism and communitarianism.
Abstract:
This is a study of ultra-cold Fermi gases in different systems. The thesis focuses on exotic superfluid states, for example the three-component Fermi gas and the FFLO phase in optical lattices. In the two-component case, superfluidity is studied mainly for spin-population-imbalanced Fermi gases, and the phase diagrams are calculated from mean-field theory. Different methods to detect the different phases in optical lattices are suggested. In the three-component case, we also studied the uniform gas and the harmonically trapped system, and BCS theory is generalized to three-component gases. It is also discussed how to achieve the conditions for an SU(3)-symmetric Hamiltonian in optical lattices. The thesis is divided into chapters as follows: Chapter 1 is an introduction to the field of cold quantum gases. In Chapter 2 optical lattices and their experimental characteristics are discussed. Chapter 3 deals with two-component Fermi gases in optical lattices and the paired states in lattices. In Chapter 4 three-component Fermi gases with and without a harmonic trap are explored, and the pairing mechanisms are studied. In this chapter, we also discuss three-component Fermi gases in optical lattices. Chapter 5 is devoted to higher-order correlations and what they can reveal about the paired states. Chapter 6 concludes the thesis.
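The abstract refers to mean-field phase diagrams without giving the underlying equations. For orientation, the generic two-component BCS self-consistency (gap) equation for an attractive lattice gas, which the imbalanced and three-component treatments generalise, is quoted below; the notation is assumed here for illustration and is not taken from the thesis.

```latex
% Generic two-component mean-field (BCS) gap equation on a lattice with
% attractive on-site interaction U > 0 and M lattice sites (illustrative
% notation, not the thesis's own).
\[
  1 \;=\; \frac{U}{M} \sum_{\mathbf{k}}
          \frac{\tanh\!\bigl(E_{\mathbf{k}} / 2 k_{B} T \bigr)}{2\, E_{\mathbf{k}}},
  \qquad
  E_{\mathbf{k}} \;=\; \sqrt{(\varepsilon_{\mathbf{k}} - \mu)^{2} + |\Delta|^{2}}.
\]
% Population imbalance enters through spin-dependent chemical potentials
% \mu_\sigma, and the FFLO phase through a finite pairing momentum in
% \Delta_{\mathbf{q}}.
```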
Abstract:
This thesis consists of four research papers and an introduction providing some background. The structure in the universe is generally considered to originate from quantum fluctuations in the very early universe. The standard lore of cosmology states that the primordial perturbations are almost scale-invariant, adiabatic, and Gaussian. A snapshot of the structure from the time when the universe became transparent can be seen in the cosmic microwave background (CMB). For a long time mainly the power spectrum of the CMB temperature fluctuations has been used to obtain observational constraints, especially on deviations from scale-invariance and pure adiabaticity. Non-Gaussian perturbations provide a novel and very promising way to test theoretical predictions. They probe beyond the power spectrum, or two-point correlator, since non-Gaussianity involves higher-order statistics. The thesis concentrates on the non-Gaussian perturbations arising in several situations involving two scalar fields, namely hybrid inflation and various forms of preheating. First we go through some basic concepts -- such as cosmological inflation, reheating and preheating, and the role of scalar fields during inflation -- which are necessary for understanding the research papers. We also review the standard linear cosmological perturbation theory. The second-order perturbation theory formalism for two scalar fields is developed. We explain what is meant by non-Gaussian perturbations and discuss some difficulties in their parametrisation and observation. In particular, we concentrate on the nonlinearity parameter. The prospects of observing non-Gaussianity are briefly discussed. We apply the formalism and calculate the evolution of the second-order curvature perturbation during hybrid inflation. We estimate the amount of non-Gaussianity in the model and find that there is a possibility for an observational effect. The non-Gaussianity arising in preheating is also studied. We find that the level produced by the simplest model of instant preheating is insignificant, whereas standard preheating with parametric resonance as well as tachyonic preheating are prone to easily saturate and even exceed the observational limits. We also mention other approaches to the study of primordial non-Gaussianities, which differ from the perturbation theory method chosen in the thesis work.
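The nonlinearity parameter mentioned above is conventionally defined through the standard local-type parametrisation of the primordial perturbation, quoted here for reference rather than taken from the thesis itself.

```latex
% Standard local-type definition of the nonlinearity parameter f_NL:
% Phi is the primordial Bardeen potential and phi_G its Gaussian part.
\[
  \Phi(\mathbf{x}) \;=\; \phi_{G}(\mathbf{x})
  \;+\; f_{\mathrm{NL}} \left( \phi_{G}^{2}(\mathbf{x}) - \langle \phi_{G}^{2} \rangle \right).
\]
% Equivalently, for the curvature perturbation one often writes
% \zeta = \zeta_G + (3/5) f_NL \zeta_G^2.
```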
Abstract:
In this thesis I examine one commonly used class of methods for the analytic approximation of cellular automata, the so-called local cluster approximations. This class subsumes the well-known mean-field and pair approximations, as well as higher-order generalizations of these. While a straightforward method known as Bayesian extension exists for constructing cluster approximations of arbitrary order on one-dimensional lattices (and in certain other cases), for higher-dimensional systems the construction of approximations beyond the pair level becomes more complicated due to the presence of loops. In this thesis I describe the one-dimensional construction as well as a number of approximations suggested for higher-dimensional lattices, comparing them against a number of consistency criteria that such approximations could be expected to satisfy. I also outline a general variational principle for constructing consistent cluster approximations of arbitrary order with minimal bias, and show that the one-dimensional construction indeed satisfies this principle. Finally, I apply this variational principle to derive a novel consistent expression for symmetric three-cell cluster frequencies as estimated from pair frequencies, and use this expression to construct a quantitatively improved pair approximation of the well-known lattice contact process on a hexagonal lattice.
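The Bayesian extension mentioned above has a compact closed form in one dimension: an n-cell cluster frequency is estimated from nearest-neighbour pair and single-cell frequencies by chaining conditionals. The standard expression is quoted below for illustration (the thesis's own variational derivation and its higher-dimensional variants are not reproduced here).

```latex
% One-dimensional Bayesian extension: n-cell cluster frequencies estimated
% from pair and single-cell frequencies (standard form, for illustration).
\[
  P(\sigma_1, \sigma_2, \dots, \sigma_n)
  \;\approx\;
  \frac{\prod_{i=1}^{n-1} P(\sigma_i, \sigma_{i+1})}
       {\prod_{i=2}^{n-1} P(\sigma_i)}.
\]
% The pair approximation corresponds to using the n = 3 case to close the
% dynamics of the pair frequencies P(\sigma_i, \sigma_{i+1}).
```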
Abstract:
The impact of Greek-Egyptian bilingualism on language use and linguistic competence is the key issue in this dissertation. The language use in a corpus of 148 Greek notarial contracts is analyzed on the phonological, morphological and syntactic levels. The texts were written by bilingual notaries (agoranomoi) in Upper Egypt in the later Hellenistic period. They present, for the most part, very good administrative Greek. On the other hand, their language contains variation and idiosyncrasies that were earlier condemned as ungrammatical and bad Greek and were not subjected to closer analysis. In order to reach plausible explanations for those phenomena, thorough research into the sociohistorical and linguistic context was needed before the linguistic analysis. The general linguistic landscape, the population pattern, and the status and frequency of Greek literacy in Ptolemaic Egypt in general, and in Upper Egypt in particular, are presented. Through a detailed examination of the notaries themselves (their names, families and handwriting), it became evident that there were one to three persons at the notarial office writing under the signature of one notary. Often the documents under one notary's name were written in the same hand. We therefore get exceptionally close to studying idiolects in written material from antiquity. The qualitative linguistic analysis revealed that the notaries made relatively few orthographic mistakes that reflect the ongoing phonological changes, and that they mastered the morphological forms. The problems arose at the syntactic level, for example with the pattern of agreement between noun groups or between a noun and its modifiers. The significant structural differences between Greek and Egyptian may lie behind the innovative strategies used by some of the notaries. Moreover, certain syntactic structures were clearly transferred from the notaries' first language, Egyptian. This is obvious in the relative clause structure. Transfer can be found in other structures as well, although we must not forget the influence of parallel Greek structures; sometimes these can act simultaneously. The interesting linguistic strategies and transfer features come mostly from the hand of one notary, Hermias. Some other notaries show similar patterns, for example Hermias' cousin, Ammonios. Hermias' texts reveal that he probably spoke Greek more than his predecessors did. It is possible to conclude, then, that the notaries of the later generations were more fluently bilingual; their two languages were partly integrated in their minds as an interlanguage combining elements from both languages. The earlier notaries kept the two languages functionally separate and followed the standardized contract formulae more rigidly.