933 results for Weak focus of third order
Abstract:
Includes bibliography
Abstract:
Objectives: The use of noninvasive cortical electrical stimulation with weak currents has significantly increased in basic and clinical human studies. Initial studies with this technique have shown encouraging results; however, the safety and tolerability of this method of brain stimulation have not yet been sufficiently explored. The purpose of our study was to assess the effects of direct current (DC) and alternating current (AC) stimulation at different intensities in order to measure their effects on cognition, mood, and the electroencephalogram. Methods: Eighty-two healthy, right-handed subjects received active and sham stimulation in a randomized order. We conducted 164 ninety-minute sessions of electrical stimulation in 4 different protocols to assess the safety of (1) anodal DC of the dorsolateral prefrontal cortex (DLPFC); (2) cathodal DC of the DLPFC; (3) intermittent anodal DC of the DLPFC; and (4) AC on the zygomatic process. We used weak currents of 1 to 2 mA (for the DC experiments) or 0.1 to 0.2 mA (for the AC experiment). Results: We found no significant changes in electroencephalogram, cognition, mood, or pain between groups and a low prevalence of mild adverse effects (0.11% and 0.08% in the active and sham stimulation groups, respectively), mainly sleepiness and mild headache, which were equally distributed between groups. Conclusions: Here, we show no neurophysiological or behavioral signs that transcranial DC stimulation or AC stimulation with weak currents induces deleterious changes when comparing active and sham groups. This study therefore provides additional information for researchers and ethics committees, adding important results to the pool of safety studies assessing the effects of cortical stimulation using weak electrical currents. Further studies in patients with neuropsychiatric disorders are warranted.
Abstract:
Forensic age estimation is an important element of anthropological research, as it produces one of the primary sources of data that researchers use to establish the identity of a living person or of unknown bodily remains. The aim of this study was to determine whether the chronology of third molar mineralization could be an accurate indicator of estimated age in a sample of the Brazilian population and, if so, whether mineralization could determine the probability of an individual being 18 years or older. The study evaluated 407 panoramic radiographs of males and females taken over the past 5 years in order to assess the mineralization status of the mandibular third molars. The evaluation was carried out using an adaptation of Demirjian's system. The results indicated a strong correlation between chronological age and the mineralization of the mandibular third molars. They also indicated that the modern Brazilian generation tends to show earlier mandibular third molar mineralization than older Brazilian generations and people of other nationalities. Males reached developmental stages slightly earlier than females, but no statistically significant differences between the sexes were found. The probability that an individual with third molar mineralization stage H had reached an age of 18 years or older was 96.8% and 98.6% for males and females, respectively. (C) 2011 Elsevier Ireland Ltd. All rights reserved.
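The central statistic in the abstract above is a conditional probability, P(age ≥ 18 | third molar at stage H), estimated separately for each sex. A minimal sketch of how such a figure can be computed from radiograph records is given below; the data frame, column names and numbers are hypothetical placeholders, not data from the study.

```python
import numpy as np
import pandas as pd
from scipy import stats

# Hypothetical records: one row per radiograph, with chronological age,
# sex, and Demirjian stage of the mandibular third molar.
records = pd.DataFrame({
    "age":   [17.2, 18.5, 19.1, 20.3, 17.9, 21.0, 18.1, 16.8],
    "sex":   ["M", "M", "F", "F", "M", "F", "M", "F"],
    "stage": ["G", "H", "H", "H", "H", "H", "G", "F"],
})

def prob_adult_given_stage(df, stage="H", threshold=18.0, conf=0.95):
    """Empirical P(age >= threshold | stage), with a Wilson confidence interval."""
    subset = df[df["stage"] == stage]
    n = len(subset)
    if n == 0:
        return np.nan, (np.nan, np.nan)
    k = int((subset["age"] >= threshold).sum())
    p_hat = k / n
    z = stats.norm.ppf(0.5 + conf / 2)
    denom = 1 + z**2 / n
    centre = (p_hat + z**2 / (2 * n)) / denom
    half = z * np.sqrt(p_hat * (1 - p_hat) / n + z**2 / (4 * n**2)) / denom
    return p_hat, (centre - half, centre + half)

for sex in ("M", "F"):
    p, ci = prob_adult_given_stage(records[records["sex"] == sex])
    print(f"{sex}: P(age >= 18 | stage H) = {p:.2f}, CI = {ci}")
```

The Wilson interval is used rather than the plain normal approximation because the estimated proportions sit close to 1, where the normal approximation is poor.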
Abstract:
A sample-scanning confocal optical microscope (SCOM) was designed and constructed in order to perform local measurements of fluorescence, light scattering and Raman scattering. This instrument makes it possible to measure time-resolved fluorescence, Raman scattering and light scattering from the same diffraction-limited spot, so that fluorescence from single molecules and light scattering from metallic nanoparticles can be studied. First, the electric field distribution in the focus of the SCOM was modelled. This enables the design of illumination modes for different purposes, such as the determination of the three-dimensional orientation of single chromophores. Second, a method for the calculation of the de-excitation rates of a chromophore was presented. This permits the comparison of different detection schemes and experimental geometries in order to optimize the collection of fluorescence photons. Both methods were combined to calculate the SCOM fluorescence signal of a chromophore in a general layered system. The fluorescence excitation and emission of single molecules through a thin gold film was investigated experimentally and modelled. It was demonstrated that, due to the mediation of surface plasmons, single-molecule fluorescence near a thin gold film can be excited and detected with an epi-illumination scheme through the film. Single-molecule fluorescence as close as 15 nm to the gold film was studied in this manner. The fluorescence dynamics (fluorescence blinking and excited-state lifetime) of single molecules was studied in the presence and in the absence of a nearby gold film in order to investigate the influence of the metal on the electronic transition rates. The trace-histogram and the autocorrelation methods for the analysis of single-molecule fluorescence blinking were presented and compared via the analysis of Monte Carlo simulated data. The nearby gold influences the total decay rate in agreement with theory. The presence of the gold had no influence on the intersystem crossing (ISC) rate from the excited state to the triplet, but increased the transition rate from the triplet to the singlet ground state by a factor of 2. The photoluminescence blinking of Zn0.42Cd0.58Se quantum dots (QDs) on glass and ITO substrates was investigated experimentally as a function of the excitation power (P) and modelled via Monte Carlo simulations. At low P, the probability of a certain on- or off-time follows a negative power law with an exponent near 1.6. As P increased, the on-time fraction was reduced on both substrates whereas the off-times did not change. A weak residual memory effect was observed between consecutive on-times and consecutive off-times, but not between an on-time and the adjacent off-time. All of this suggests the presence of two independent mechanisms governing the lifetimes of the on- and off-states. The simulated data showed Poisson-distributed off- and on-intensities, demonstrating that the observed non-Poissonian on-intensity distribution of the QDs is not a product of the underlying power-law probability and that the blinking of QDs occurs between a non-emitting off-state and a distribution of emitting on-states with different intensities. All the experimentally observed photo-induced effects could be accounted for by introducing a characteristic lifetime tPI of the on-state in the simulations. The QDs on glass presented a tPI proportional to P^-1, suggesting the presence of a one-photon process. Light scattering images and spectra of colloidal and C-shaped gold nanoparticles were acquired.
The minimum size of a metallic scatterer detectable with the SCOM lies around 20 nm.
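The blinking analysis above rests on Monte Carlo simulations of on/off dwell times drawn from a negative power law with an exponent near 1.6. The sketch below reproduces only that basic sampling step and the resulting on-time fraction; the parameter values and function names are illustrative assumptions, not the simulation code used in the thesis.

```python
import numpy as np

rng = np.random.default_rng(0)

def power_law_times(alpha, t_min, size, rng):
    """Draw dwell times from p(t) ~ t**(-alpha) for t >= t_min (inverse-CDF sampling)."""
    u = rng.random(size)
    return t_min * (1.0 - u) ** (-1.0 / (alpha - 1.0))

def simulate_blinking(alpha_on=1.6, alpha_off=1.6, t_min=1e-3,
                      total_time=100.0, rng=rng):
    """Alternate on and off periods until total_time is reached; return the on-time fraction."""
    t, on_time, state_on = 0.0, 0.0, True
    while t < total_time:
        dwell = power_law_times(alpha_on if state_on else alpha_off, t_min, 1, rng)[0]
        dwell = min(dwell, total_time - t)          # truncate the last dwell at total_time
        if state_on:
            on_time += dwell
        t += dwell
        state_on = not state_on
    return on_time / total_time

print("on-time fraction:", simulate_blinking())
```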
Abstract:
In many industries, for example the automotive industry, digital mock-ups (Digital MockUps) are used to verify the design and the function of a product on a virtual prototype. One use case is the verification of safety clearances between individual parts, the so-called clearance analysis. For selected parts, engineers determine whether, in their rest position as well as during a motion, they maintain a prescribed safety distance to the surrounding parts. If parts fall below the safety distance, their shape or position has to be changed. For this it is important to know exactly which regions of the parts violate the safety distance.
In this work we present a solution for the real-time computation of all regions between two geometric objects that fall below the safety distance. Each object is given as a set of primitives (e.g. triangles). For every instant at which a transformation is applied to one of the objects, we compute the set of all primitives that fall below the safety distance and call it the set of all tolerance-violating primitives. We present a comprehensive solution, which can be divided into the following three major topics.
In the first part of this work we investigate algorithms that check, for two triangles, whether they are tolerance-violating. We present several approaches for triangle-triangle tolerance tests and show that dedicated tolerance tests are considerably faster than the distance computations used so far. The focus of our work is the development of a novel tolerance test that operates in dual space. In all our benchmarks for the computation of all tolerance-violating primitives, our dual-space approach proves to be the fastest.
The second part of this work deals with data structures and algorithms for the real-time computation of all tolerance-violating primitives between two geometric objects. We develop a combined data structure that consists of a flat hierarchical data structure and several uniform grids. To guarantee efficient running times, it is particularly important to take the required safety distance into account in the design of the data structures and of the query algorithms. We present solutions that quickly determine the set of primitive pairs that need to be tested. In addition, we develop strategies for recognizing primitives as tolerance-violating without computing an expensive primitive-primitive tolerance test. In our benchmarks we show that our solutions are able to compute, in real time, all tolerance-violating primitives between two complex geometric objects, each consisting of many hundreds of thousands of primitives.
In the third part we present a novel, memory-optimized data structure for managing the cell contents of the uniform grids used before, which we call Shrubs. Previous approaches to the memory optimization of uniform grids mainly rely on hashing methods, but these do not reduce the memory consumption of the cell contents. In our use case, neighbouring cells often have similar contents. Our approach is able to compress the memory footprint of the cell contents of a uniform grid losslessly, based on these redundant cell contents, to one fifth of its original size, and to decompress it at run time.
Finally, we show how our solution for the computation of all tolerance-violating primitives can be applied in practice. Beyond pure clearance analysis, we show applications to various path-planning problems.
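The abstract above (translated from German) describes a pipeline that first narrows down candidate primitive pairs with uniform grids and then applies a triangle-triangle tolerance test. The sketch below illustrates only the conventional version of that idea: a grid-based broad phase with the safety distance folded into the cell assignment, followed by a simplified vertex-to-vertex check. It is not the dual-space tolerance test or the Shrub data structure developed in the thesis, and all names and parameters are illustrative.

```python
import numpy as np
from collections import defaultdict
from itertools import product

def grid_cells(tri, cell_size, margin):
    """All grid cells overlapped by the axis-aligned box of a triangle, inflated by margin."""
    lo = np.floor((tri.min(axis=0) - margin) / cell_size).astype(int)
    hi = np.floor((tri.max(axis=0) + margin) / cell_size).astype(int)
    return product(*(range(l, h + 1) for l, h in zip(lo, hi)))

def candidate_pairs(tris_a, tris_b, safety_dist, cell_size):
    """Broad phase: index pairs whose (inflated) bounding boxes share at least one grid cell."""
    grid = defaultdict(list)
    for i, tri in enumerate(tris_a):
        for cell in grid_cells(tri, cell_size, safety_dist):
            grid[cell].append(i)
    pairs = set()
    for j, tri in enumerate(tris_b):
        for cell in grid_cells(tri, cell_size, 0.0):
            for i in grid[cell]:
                pairs.add((i, j))
    return pairs

def violates_tolerance(tri_a, tri_b, safety_dist):
    """Simplified vertex-to-vertex check; it can miss edge/face proximity."""
    d = np.linalg.norm(tri_a[:, None, :] - tri_b[None, :, :], axis=-1)
    return d.min() < safety_dist

# usage: tris_a, tris_b are arrays of shape (n, 3, 3)
#   pairs = candidate_pairs(tris_a, tris_b, d, cell_size=4 * d)
#   violating = [(i, j) for i, j in pairs if violates_tolerance(tris_a[i], tris_b[j], d)]
```

A complete narrow phase would also have to consider edge and face distances; according to the abstract, the thesis replaces such full distance computations with dedicated tolerance tests, which is where its performance gains come from.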
Abstract:
The main focus of this paper is the hydrodynamic modelling of a semisubmersible platform (which can support a 1.5 MW wind turbine and is composed of three buoyant columns connected by bracings), with special emphasis on the estimation of the wave drift components and their effects on the design of the mooring system. Indeed, with natural periods of drift around 60 seconds, accurate computation of the low-frequency second-order components is not a straightforward task. As the methods usually adopted when dealing with the slow drifts of deep-water moored systems, such as Newman's approximation, have their errors increased by the relatively low resonant periods, and as the effects of depth cannot be ignored, the wave diffraction analysis must be based on full Quadratic Transfer Function (QTF) computations. A discussion of the numerical aspects of performing such computations is presented, making use of the second-order module available with the seakeeping software WAMIT®. Finally, the paper also provides a preliminary verification of the accuracy of the numerical predictions based on the results obtained in a series of model tests with the structure fixed in bichromatic waves.
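For orientation, the difference-frequency (slow-drift) second-order force and one common statement of Newman's approximation can be written as follows; this is the standard textbook form, not an equation reproduced from the paper.

$$
F^{(2)}(t) = \sum_{i}\sum_{j} A_i A_j \Big[ P_{ij}\cos\big((\omega_i-\omega_j)t\big) + Q_{ij}\sin\big((\omega_i-\omega_j)t\big) \Big],
$$

where $A_i$, $\omega_i$ are the component amplitudes and frequencies and $P_{ij}$, $Q_{ij}$ are the in-phase and out-of-phase difference-frequency QTF components. Newman's approximation replaces the off-diagonal terms by the mean of the diagonal (mean-drift) values,

$$
P_{ij} \approx \tfrac{1}{2}\big(P_{ii} + P_{jj}\big), \qquad Q_{ij} \approx 0,
$$

which is acceptable only when the resonant difference frequency $\omega_i-\omega_j$ is small; with drift periods near 60 seconds and non-negligible depth effects, the paper argues that the full QTF must be computed instead.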
Abstract:
This PhD dissertation is framed within the emergent fields of Reverse Logistics and Closed-Loop Supply Chain (CLSC) management. This subarea of supply chain management has gained researchers' and practitioners' attention over the last 15 years to become a fully recognized subdiscipline of the Operations Management field. More specifically, among all the activities included within the CLSC area, the focus of this dissertation is centered on direct reuse. The main contribution of this dissertation to current knowledge is twofold. First, a framework for the so-called reuse CLSC is developed. This conceptual model is grounded in a set of six case studies conducted by the author in real industrial settings, and it has also been contrasted with the existing literature and with academic and professional experts on the topic. The framework encompasses four building blocks. In the first block, a typology for reusable articles is put forward, distinguishing between Returnable Transport Items (RTI), Reusable Packaging Materials (RPM), and Reusable Products (RP). In the second block, the common characteristics that make reuse CLSCs difficult to manage from a logistical standpoint are identified, namely fleet shrinkage, significant investment and limited visibility. In the third block, the main problems arising in the management of reuse CLSCs are analyzed: (1) defining the fleet size, (2) controlling cycle time and promoting article rotation, (3) controlling the return rate and preventing shrinkage, (4) defining purchase policies for new articles, (5) planning and controlling reconditioning activities, and (6) balancing inventory between depots. Finally, in the fourth block, some solutions to those issues are developed. Firstly, problems (2) and (3) are addressed through a comparative analysis of alternative strategies for controlling cycle time and return rate. Secondly, a methodology for calculating the required fleet size is elaborated (problem (1)); this methodology is valid for different configurations of the physical flows in the reuse CLSC. Likewise, some directions are pointed out for the further development of a similar method for defining purchase policies for new articles (problem (4)). The second main contribution of this dissertation is embedded in the solutions part (block 4) of the conceptual framework and comprises a two-level decision problem integrating two mixed integer linear programming (MILP) models, which have been formulated and solved to optimality using AIMMS as the modeling language, CPLEX as the solver and Excel spreadsheets for data input and output presentation. The results obtained are analyzed in order to measure, in a client-supplier system, the economic impact of two alternative control strategies (recovery policies) in the context of reuse. In addition, the models support decision-making regarding the selection of the appropriate recovery policy given the characteristics of the demand pattern and the structure of the relevant costs in the system. The triangulation of methods used in this thesis has made it possible to address the same research topic with different approaches, thereby strengthening the robustness of the results obtained.
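To make the fleet-sizing problem (problem (1) above) concrete, the toy relation below links the required number of articles to the issue rate, the average cycle time and the return rate. It is a stylized back-of-the-envelope sketch with made-up numbers, not the methodology or the MILP models developed in the dissertation.

```python
def required_fleet_size(daily_issues, cycle_time_days, return_rate, safety_factor=0.1):
    """Articles needed in circulation to cover daily issues, given the average
    cycle time and the fraction of articles that actually come back each cycle."""
    in_circulation = daily_issues * cycle_time_days / return_rate
    return int(round(in_circulation * (1.0 + safety_factor)))

# e.g. 500 crates issued per day, 12-day average cycle, 92% return rate
print(required_fleet_size(500, 12, 0.92))  # ~7174 crates
```

Shrinkage enters through the return rate: the lower the fraction of articles that come back, the larger the fleet (and the purchase volume of new articles) has to be.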
Abstract:
The crystal structure of raite was solved and refined from data collected at Beamline Insertion Device 13 at the European Synchrotron Radiation Facility, using a 3 × 3 × 65 μm single crystal. The refined lattice constants of the monoclinic unit cell are a = 15.1(1) Å; b = 17.6(1) Å; c = 5.290(4) Å; β = 100.5(2)°; space group C2/m. The structure, including all reflections, refined to a final R = 0.07. Raite occurs in hyperalkaline rocks from the Kola peninsula, Russia. The structure consists of alternating layers of a hexagonal chicken-wire pattern of 6-membered SiO4 rings. Tetrahedral apices of a chain of Si six-rings, parallel to the c-axis, alternate in pointing up and down. Two six-ring Si layers are connected by edge-sharing octahedral bands of Na+ and Mn3+ also parallel to c. The band consists of the alternation of finite Mn–Mn and Na–Mn–Na chains. As a consequence of the misfit between octahedral and tetrahedral elements, regions of the Si–O layers are arched and form one-dimensional channels bounded by 12 Si tetrahedra and 2 Na octahedra. The channels along the short c-axis in raite are filled by isolated Na(OH,H2O)6 octahedra. The distorted octahedrally coordinated Ti4+ also resides in the channel and provides the weak linkage of these isolated Na octahedra and the mixed octahedral tetrahedral framework. Raite is structurally related to intersilite, palygorskite, sepiolite, and amphibole.
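As a quick arithmetic check (not stated in the abstract), the volume of the monoclinic cell follows directly from the refined constants:

$$
V = abc\sin\beta = 15.1 \times 17.6 \times 5.290 \times \sin(100.5^\circ)\ \text{Å}^3 \approx 1.38 \times 10^{3}\ \text{Å}^3 .
$$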
Abstract:
From the Introduction. In the USA, the debate is still ongoing as to whether and to what extent the Supreme Court could or should refer to foreign precedent, in particular in relation to constitutional matters such as the death penalty. In the EU, the recent Kadi case of 2008 in particular has triggered much controversy, thereby highlighting the opposite angle of a similar discussion. The focus of attention in Europe is namely to what extent the European Court of Justice (hereafter “ECJ”) could lawfully and rightfully refuse to plainly ‘surrender’ or to subordinate the EC legal system to UN law and obligations when dealing with human rights issues. This question becomes all the more pertinent in view of the fact that in the past the ECJ has been rather receptive and constructive in forging interconnectivity between the EC legal order and developments in international law. A benchmark in that respect was undoubtedly the Racke case of 1998, where the ECJ spelled out the necessity for the EC to respect international law, with direct reference to a ruling of the International Court of Justice. This judgment, which was rendered 10 years earlier than Kadi, equally concerned EC/EU economic sanctions taken in implementation of UN Security Council Resolutions. A major question is therefore whether, and if so how, those apparently conflicting judgments can be reconciled.
Abstract:
Translation of Combattimento spirituale.
Abstract:
Flying foxes have been the focus of research into three newly described viruses from the order Mononegavirales, namely Hendra virus (HeV), Menangle virus and Australian Bat Lyssavirus (ABL). Early investigations indicate that flying foxes are the reservoir host for these viruses. In 1994, two outbreaks of a new zoonotic disease affecting horses and humans occurred in Queensland. The virus found to be responsible was called equine morbillivirus (EMV) and has since been renamed HeV. Investigation into the reservoir of HeV has produced evidence that antibodies capable of neutralising HeV have been detected only in flying foxes: over 20% of flying foxes in eastern Australia have been identified as seropositive, and six species of flying foxes in Papua New Guinea have also tested positive for antibodies to HeV. In 1996, a virus from the family Paramyxoviridae was isolated from the uterine fluid of a female flying fox. Sequencing of 10 000 of the 18 000 base pairs (bp) has shown that the sequence is identical to the HeV sequence. As part of the investigations into HeV, a virus was isolated from a juvenile flying fox which presented with neurological signs in 1996. This virus was characterised as belonging to the family Rhabdoviridae and was named ABL. Since then, four flying fox species and one insectivorous species have tested positive for ABL. The third virus to be detected in flying foxes is Menangle virus, belonging to the family Paramyxoviridae. This virus was responsible for a zoonotic disease affecting pigs and humans in New South Wales in 1997. Antibodies capable of neutralising Menangle virus were detected in flying foxes. (C) 1999 Elsevier Science B.V. All rights reserved.
Abstract:
Recent discussion of the knowledge-based economy draws increasing attention to the role that the creation and management of knowledge plays in economic development. Development of human capital, the principal mechanism for knowledge creation and management, becomes a central issue for policy-makers and practitioners at the regional, as well as national, level. Facing competition both within and across nations, regional policy-makers view human capital development as a key to strengthening the positions of their economies in the global market. Against this background, the aim of this study is to go some way towards answering the question of whether, and how, investment in education and vocational training at regional level provides these territorial units with comparative advantages. The study reviews literature in economics and economic geography on economic growth (Chapter 2). In the growth model literature, human capital has gained increased recognition as a key production factor along with physical capital and labour. Although leaving technical progress as an exogenous factor, neoclassical Solow-Swan models have improved their estimates through the inclusion of human capital. In contrast, endogenous growth models place investment in research at centre stage in accounting for technical progress. As a result, they often focus upon research workers, who embody high-order human capital, as a key variable in their framework. An issue of discussion is how human capital facilitates economic growth: is it the level of its stock or its accumulation that influences the rate of growth? In addition, these economic models are criticised in the economic geography literature for their failure to consider spatial aspects of economic development, and particularly for their lack of attention to tacit knowledge and urban environments that facilitate the exchange of such knowledge. Our empirical analysis of European regions (Chapter 3) shows that investment by individuals in human capital formation has distinct patterns. Those regions with a higher level of investment in tertiary education tend to have a larger concentration of information and communication technology (ICT) sectors (including provision of ICT services and manufacture of ICT devices and equipment) and research functions. Not surprisingly, regions with major metropolitan areas where higher education institutions are located show a high enrolment rate for tertiary education, suggesting a possible link to the demand from high-order corporate functions located there. Furthermore, the rate of human capital development (at the level of vocational upper secondary education) appears to have a significant association with the level of entrepreneurship in emerging industries such as ICT-related services and ICT manufacturing, whereas no such association is found with traditional manufacturing industries. In general, a high level of investment by individuals in tertiary education is found in those regions that accommodate high-tech industries and high-order corporate functions such as research and development (R&D). These functions are supported through the urban infrastructure and public science base, facilitating exchange of tacit knowledge. They also enjoy a low unemployment rate. However, the existing stock of human and physical capital in those regions with a high level of urban infrastructure does not lead to a high rate of economic growth.
Our empirical analysis demonstrates that the rate of economic growth is determined by the accumulation of human and physical capital, not by level of their existing stocks. We found no significant effects of scale that would favour those regions with a larger stock of human capital. The primary policy implication of our study is that, in order to facilitate economic growth, education and training need to supply human capital at a faster pace than simply replenishing it as it disappears from the labour market. Given the significant impact of high-order human capital (such as business R&D staff in our case study) as well as the increasingly fast pace of technological change that makes human capital obsolete, a concerted effort needs to be made to facilitate its continuous development.
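For reference, the human-capital-augmented Solow-Swan model alluded to in the review is usually written in its textbook (Mankiw-Romer-Weil) form; the equation below is added for orientation and is not reproduced from the study:

$$
Y(t) = K(t)^{\alpha}\, H(t)^{\beta}\, \big(A(t)L(t)\big)^{1-\alpha-\beta}, \qquad \alpha + \beta < 1,
$$

with physical capital $K$, human capital $H$, labour $L$ and exogenous technology $A$. Endogenous growth models, by contrast, make the growth of $A$ depend on resources devoted to research, which is why research workers appear as a key variable in that framework.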
Abstract:
This thesis is focussed on the role differentiation hypothesis as it relates to small groups (Bales, 1958). The hypothesis is systematically examined, both conceptually and empirically, in the light of the Equilibrium Hypothesis (Bales, 1953) and the Negotiated Order Theory of leadership (e.g. Hosking, 1988). Chapter 1 sketches a context for the research, which was stimulated by attempts during the 60s and 70s to organise small groups without leaders (the leaderless group, based on isocratic principles). Chapter 2 gives a conceptual and developmental overview of Bales' work, concentrating on the Equilibrium Hypothesis. It is argued that Bales' conceptual approach, if developed, can potentially integrate the disparate small groups and leadership literatures. Chapters 3 and 4 examine the concepts `group', `leader' and `leadership' in terms of the Negotiated Order perspective. In chapter 3 it is argued that two aspects of the concept of group need to be taken into account separately: physical attributes and social psychological aspects (the metaphysical glue). It is further argued that a collection of people becomes a group only when they begin to establish a shared sense of social order. In chapter 4 it is argued that leadership is best viewed as a process of negotiation between those who influence and those who are influenced, in the context of shared values about means and ends. It is further argued that leadership is the process by which a shared sense of social order is established and maintained, thus linking the concepts `leadership' and `group' in a single formulation. The correspondences with Bales' approach are discussed at the end of the chapter. Chapters 5 to 8 present a detailed critical description and evaluation of the empirical work which claims to show role differentiation or test the hypothesis, both Bales' original work and subsequent studies. It is argued here that the measurement and analytical procedures adopted by Bales and others, in particular the use of simple means as summaries of group structures, are fundamentally flawed, and that role differentiation in relation to particular identifiable groups has not been demonstrated clearly anywhere in the literature. Chapters 9 to 13 present the empirical work conducted for the thesis. Eighteen small groups are examined systematically for evidence of role differentiation using an approach based on early sociometry (Moreno, 1934). The results suggest that role differentiation, as described by Bales, does not occur as often as is implied in the literature, and not unequivocally in any case. In particular, structures derived from Liking are typically distributed or weak. This suggests that one of Bales' principal findings, that Liking varies independently of his other main dimensions, is the product of a statistical artifact. Chapter 14 presents a general summary of the results and some considerations about future research.
Abstract:
This article discusses property rights, corporate governance frameworks and privatisation outcomes in the Central–Eastern Europe and Central Asia (CEECA) region. We argue that while CEECA still suffers from deficient ‘higher order’ institutions, this is not attracting sufficient attention from international institutions like the EBRD and the World Bank, which focus on ‘lower order’ indicators. We discuss factors that may alleviate the negative impact of the weakness of the institutional environment and argue for a pecking order of privatisation, in which equivalent privatisation is given priority but speed is not compromised.
Abstract:
From a sociocultural perspective, individuals learn best from contextualized experiences. In preservice teacher education, contextualized experiences include authentic literacy experiences, which involve a real reader and writer and replicate real-life communication. To be prepared to teach well, preservice teachers need to gain literacy content knowledge and possess reading maturity. The purpose of this study was to examine the effect of authentic literacy experiences, as Book Buddies with Hispanic fourth graders, on preservice teachers' literacy content knowledge and reading maturity. The study was a pretest/posttest design conducted over 12 weeks. The preservice teacher participants, the focus of the study, were 43 elementary education majors (n = 33 experimental, n = 10 comparison) taking the third of four required reading courses, assigned to non-probabilistic convenience groups. The Survey of Preservice Teachers' Knowledge of Teaching and Technology (SPTKTT), specifically designed for preservice teachers majoring in elementary or early childhood education, and the Reading Maturity Survey (RMS) were used in this study. Preservice teachers chose either the experimental or the comparison group based on the opportunity to earn extra credit points (experimental = 30 points, comparison = 15). After exchanging introductory letters, preservice teachers and Hispanic fourth graders each read four books. After reading each book, the preservice teachers wrote letters to their student asking higher-order thinking questions. Preservice teachers received scanned copies of their student's unedited letters via email, which enabled them to see their student's authentic answers and writing levels. A series of analyses of covariance was used to determine whether there were significant differences in the dependent variables between the experimental and comparison groups. This quasi-experimental study tested two hypotheses. Using the appropriate pretest scores as covariates, the adjusted posttest means of the Literacy Content Knowledge (LCK) subcategory of the SPTKTT and of the RMS were compared between the experimental and comparison groups. No significant differences were found on the LCK dependent variable at the .05 level of significance, which may be due to Type II error caused by the small sample size. Significant differences were found on the RMS at the .05 level of significance.
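The analysis described above is a standard ANCOVA with the pretest score as covariate and group as the factor of interest. A minimal sketch of such an analysis is shown below; the data frame, variable names and scores are hypothetical placeholders, not the study's data.

```python
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

# Hypothetical data: one row per preservice teacher, pre/post LCK scores.
df = pd.DataFrame({
    "group":    ["experimental"] * 4 + ["comparison"] * 3,
    "pre_lck":  [42, 39, 45, 41, 40, 38, 44],
    "post_lck": [48, 44, 50, 47, 43, 41, 46],
})

# ANCOVA: posttest score explained by group, adjusting for the pretest covariate.
model = smf.ols("post_lck ~ C(group) + pre_lck", data=df).fit()
anova_table = sm.stats.anova_lm(model, typ=2)
print(anova_table)   # F-test for the adjusted group effect at the chosen alpha (e.g. .05)
print(model.params)  # coefficient on C(group) = adjusted group difference
```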