Abstract:
This dissertation assesses the status of odontocetes in Hawaiian waters, focusing on Oʻahu. The work builds on the available literature and on data collected by the author and by others in Hawaiian waters. Abundance and distribution patterns of odontocetes were derived from stranding and aerial survey data. A stranding network operated by the National Marine Fisheries Service, Pacific Area Office, collected 187 stranding reports throughout the main Hawaiian Islands between 1937 and 2002. These reports included 16 odontocete species. The number of stranding reports increased over time and was highest on Oʻahu. Strandings occurred throughout the year, and differences in the number of strandings per month were not significant. Fifteen of the 16 species reported in the stranding record for the main Hawaiian Islands were also reported by aerial survey studies of the area between 1993 and 1998. Only 7 of the species reported were detected during aerial transects around Oʻahu between 1998 and 2000. Based on the stranding record, Kogia sp., melon-headed whales, striped dolphins, and pygmy killer whales appear to be more common than suggested by aerial surveys. Conversely, pilot whales and bottlenose dolphins were more common, according to aerial surveys, than predicted by the stranding data. Aerial surveys of waters between 0 and 500 m around the island of Oʻahu showed that the most abundant species by frequency of occurrence was the pilot whale (30% of sightings), followed by the spinner dolphin (16%) and the bottlenose dolphin (14%). Because of small sample sizes, abundance estimates for odontocetes have a high level of uncertainty. The unavailability of a correction factor for g(0) < 1 and the reduced visibility below the aircraft further reduced accuracy and increased the inherent underestimation in the data. The most abundant species according to distance sampling estimates were spotted dolphins, pilot whales, false killer whales, and spinner dolphins. A natural factor shaping the ecology of odontocete populations is predation pressure, both by other odontocetes and, more frequently, by sharks. An account of predation by a tiger shark on a spotted dolphin near Penguin Banks is used as an example of the potential mechanisms of shark predation on odontocetes.
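For reference, the conventional line-transect estimator that distance sampling builds on can be sketched in a few lines; the survey numbers below are hypothetical, and the point is only to show how an unmodeled g(0) < 1 biases density low.

```python
# Minimal line-transect (distance sampling) density sketch.
# All survey numbers below are hypothetical, for illustration only.

def density(n_groups, mean_group_size, effort_km, esw_km, g0=1.0):
    """Conventional estimator: D = n * E[s] / (2 * L * ESW * g0).

    n_groups        sightings along the trackline
    mean_group_size expected group size
    effort_km       total transect length L
    esw_km          effective strip half-width
    g0              probability of detection on the trackline
    """
    return (n_groups * mean_group_size) / (2.0 * effort_km * esw_km * g0)

naive = density(12, 18.0, 1500.0, 1.2)           # assumes g(0) = 1
corrected = density(12, 18.0, 1500.0, 1.2, 0.7)  # hypothetical g(0) = 0.7
print(f"naive: {naive:.4f}/km^2, corrected: {corrected:.4f}/km^2")
# Omitting the g(0) correction underestimates density by a factor of 1/g0.
```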
Abstract:
This paper examines how the female characters in the Greek novels have recourse to false speech. Starting from an analysis of female speech in Attic tragedy, one of the literary genres that most strongly influenced the speech parts of the novels, the study identifies which characters in the novel employ false speech and their purpose in doing so. Two types of false speech are identified: a defensive type, used by the female protagonists or by secondary characters of similar social and ideological status, and an offensive type, used by characters of lower rank and blameworthy morality within the ideological framework of love promoted by the novel.
Abstract:
The foundation of the argument of Habermas, a leading critical theorist, lies in the unequal distribution of wealth across society. He states that in an advanced capitalist society, the possibility of a crisis has shifted from the economic and political spheres to the legitimation system. Legitimation crises increase the more government intervenes in the economy (market) alongside the "simultaneous political enfranchisement of almost the entire adult population" (Holub, 1991, p. 88). This increase occurs because policymakers in advanced capitalist democracies are caught between conflicting imperatives: they are expected to serve the interests of their nation as a whole, but they must prop up an economic system that benefits the wealthy at the expense of most workers and the environment. Habermas argues that the driving force in history is an expectation, built into the nature of language, that norms, laws, and institutions will serve the interests of the entire population and not just those of a special group. In his view, policymakers in capitalist societies must fend off this expectation by simultaneously correcting some of the inequities of the market, denying that they have control over people's economic circumstances, and defending the market as an equitable allocator of income (deHaven-Smith, 1988, p. 14). Critical theory suggests that this contradiction will be reflected in Everglades policy through communicative narratives that suppress and conceal tensions between environmental and economic priorities. Habermas's Legitimation Crisis holds that political actors use various symbols, ideologies, narratives, and language to engage the public and avoid a legitimation crisis. These influences not only manipulate the general population into desiring what has been manufactured for them, but also leave them feeling unfulfilled and alienated. In what is known as false reconciliation, the public's view of society as rational and "conducive to human freedom and happiness" gives way to a society that is deeply irrational and an obstacle to the desired freedom and happiness (Finlayson, 2005, p. 5). These obstacles and irrationalities give rise to potential crises in the society. Government's increasing involvement in the Everglades under advanced capitalism leads to Habermas's four crises: economic/environmental, rationality, legitimation, and motivation. These crises occur simultaneously, work in conjunction with one another, and arise when a principle of organization is challenged by increased production needs (deHaven-Smith, 1988). Habermas states that governments use narratives in an attempt to rationalize, legitimize, obscure, and conceal their actions under advanced capitalism. Although many narratives have been told throughout the history of the Everglades (for example, that the Everglades was a wilderness valued as a wasteland in its natural state), the most recent narrative, "Everglades Restoration", is the focus of this paper.
Abstract:
Identifying the highly gifted children present in society is not easy. For that reason, the core of this work has been identifying the characteristics of these students. Drawing on different authors and theories, two didactic contributions were carried out. A documentary was produced that presents these students and their characteristics, in which the myths and misconceptions held about this group are debunked. The second contribution of the work is a list of observation tools to help families and teachers identify these children, useful for recognizing them in any setting.
Abstract:
Uncovering the demographics of extrasolar planets is crucial to understanding the processes of their formation and evolution. In this thesis, we present four studies that contribute to this end, three of which relate to NASA's Kepler mission, which has revolutionized the field of exoplanets in the last few years.
In the pre-Kepler study, we investigate a sample of exoplanet spin-orbit measurements---measurements of the inclination of a planet's orbit relative to the spin axis of its host star---to determine whether a dominant planet migration channel can be identified, and at what confidence. Applying methods of Bayesian model comparison to distinguish between the predictions of several different migration models, we find that the data strongly favor a two-mode migration scenario combining planet-planet scattering and disk migration over a single-mode Kozai migration scenario. While we test only the predictions of particular Kozai and scattering migration models in this work, these methods may be used to test the predictions of any other spin-orbit misaligning mechanism.
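As a rough illustration of this kind of model comparison (not the dissertation's actual likelihoods or data), one can compare two hypothetical predicted distributions for the spin-orbit misalignment angle via their marginal likelihoods:

```python
import numpy as np
from scipy import stats

# Hedged sketch: Bayes-factor comparison of two hypothetical predicted
# distributions for the spin-orbit misalignment angle (degrees). The
# densities below stand in for real migration-model predictions.
rng = np.random.default_rng(0)
psi_obs = np.abs(rng.normal(0.0, 15.0, size=30))  # fake "observed" angles

def log_marginal(data, pdf):
    """Sum of log densities = log marginal likelihood under the model."""
    return np.sum(np.log(pdf(data)))

pdf_aligned = stats.halfnorm(scale=20.0).pdf            # e.g. disk migration
pdf_isotropic = lambda x: np.full_like(x, 1.0 / 180.0)  # e.g. Kozai/scattering

log_bf = log_marginal(psi_obs, pdf_aligned) - log_marginal(psi_obs, pdf_isotropic)
print(f"ln(Bayes factor, aligned vs isotropic) = {log_bf:.1f}")
# ln BF >> 0 favors the aligned model; << 0 favors the isotropic one.
```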
We then present two studies addressing astrophysical false positives in Kepler data. The Kepler mission has identified thousands of transiting planet candidates, but relatively few have yet been dynamically confirmed as bona fide planets, and only a handful more are even conceivably amenable to future dynamical confirmation. As a result, the ability to draw detailed conclusions about the diversity of exoplanet systems from Kepler detections relies critically on understanding the probability that any individual candidate might be a false positive. We show that a typical a priori false positive probability for a well-vetted Kepler candidate is only about 5-10%, enabling confidence in demographic studies that treat candidates as true planets. We also present a detailed procedure that can be used to securely and efficiently validate any individual transit candidate using detailed information about the signal's shape as well as follow-up observations, if available.
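The false positive probability calculation is, at heart, an application of Bayes' rule over the planet and false-positive scenarios. A minimal sketch with hypothetical priors and shape-based likelihoods:

```python
# Hedged sketch of an a posteriori false positive probability (FPP)
# calculation for a transit candidate. The priors and likelihoods are
# hypothetical placeholders, not the procedure's actual numbers.

def fpp(prior_planet, like_planet, prior_fp, like_fp):
    """Posterior probability the signal is a false positive (Bayes' rule)."""
    post_fp = prior_fp * like_fp
    post_pl = prior_planet * like_planet
    return post_fp / (post_fp + post_pl)

# Hypothetical numbers: false-positive scenarios are a priori common but
# fit the observed transit shape poorly; the planet scenario fits well.
p = fpp(prior_planet=0.4, like_planet=0.9, prior_fp=0.6, like_fp=0.05)
print(f"FPP = {p:.3f}")
# A shape-based likelihood that disfavors the FP scenarios drives the
# posterior FPP down toward the few-percent level quoted above.
```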
Finally, we calculate an empirical, non-parametric estimate of the shape of the radius distribution of small planets with periods less than 90 days orbiting cool (less than 4000 K) dwarf stars in the Kepler catalog. This effort reveals several notable features of the distribution, in particular a maximum in the radius function around 1-1.25 Earth radii and a steep drop-off in the distribution for planets larger than 2 Earth radii. Even more importantly, the methods presented in this work can be applied to a broader subsample of Kepler targets to understand how the radius function of planets changes across different types of host stars.
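One standard way to build such a non-parametric estimate is a detection-efficiency-weighted kernel density estimate; the sketch below uses a synthetic catalog and a toy efficiency model, not the Kepler numbers:

```python
import numpy as np

# Hedged sketch of a non-parametric radius-function estimate: a weighted
# kernel density estimate over candidate radii, with each detection
# up-weighted by the inverse of a (hypothetical) detection efficiency.
rng = np.random.default_rng(1)
radii = rng.lognormal(mean=np.log(1.2), sigma=0.5, size=200)  # fake catalog (Earth radii)
detection_eff = np.clip(radii / 2.5, 0.05, 1.0)  # toy: small planets are harder to find
weights = 1.0 / detection_eff
weights /= weights.sum()

grid = np.linspace(0.5, 4.0, 200)
bw = 0.15  # Gaussian kernel bandwidth in Earth radii
kernel = np.exp(-0.5 * ((grid - radii[:, None]) / bw) ** 2) / (bw * np.sqrt(2 * np.pi))
kde = np.sum(weights[:, None] * kernel, axis=0)
print(f"radius function peaks near {grid[np.argmax(kde)]:.2f} Earth radii")
```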
Abstract:
Within the microcosm of information theory, I explore what it means for a system to be functionally irreducible. This is operationalized as quantifying the extent to which cooperative or "synergistic" effects enable random variables X1, ..., Xn to predict (have mutual information about) a single target random variable Y. In Chapter 1, we introduce the problem with some emblematic examples. In Chapter 2, we show how six different measures from the existing literature fail to quantify this notion of synergistic mutual information. In Chapter 3, we take a step towards a measure of synergy that yields the first nontrivial lower bound on synergistic mutual information. In Chapter 4, we find that synergy is but the weakest notion of a broader concept of irreducibility. In Chapter 5, we apply our results from Chapters 3 and 4 towards grounding Giulio Tononi's ambitious φ measure, which attempts to quantify the magnitude of conscious experience.
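The XOR gate is the emblematic example of purely synergistic information: neither input alone predicts the target, yet together they determine it exactly. A self-contained check:

```python
import itertools
from collections import Counter
from math import log2

# XOR: the emblematic example of synergy. Neither X1 nor X2 alone carries
# information about Y = X1 xor X2, yet jointly they determine Y exactly.
samples = [(x1, x2, x1 ^ x2) for x1, x2 in itertools.product([0, 1], repeat=2)]

def mutual_information(pairs):
    """I(A;B) in bits from a list of equally likely (a, b) pairs."""
    n = len(pairs)
    p_ab = Counter(pairs)
    p_a = Counter(a for a, _ in pairs)
    p_b = Counter(b for _, b in pairs)
    return sum((c / n) * log2((c / n) / ((p_a[a] / n) * (p_b[b] / n)))
               for (a, b), c in p_ab.items())

i_joint = mutual_information([((x1, x2), y) for x1, x2, y in samples])
i_x1 = mutual_information([(x1, y) for x1, _, y in samples])
i_x2 = mutual_information([(x2, y) for _, x2, y in samples])
print(i_joint, i_x1, i_x2)  # 1.0 0.0 0.0 -- all the information is synergistic
```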
Abstract:
The dynamic properties of a structure are a function of its physical properties, and changes in the physical properties of the structure, including the introduction of structural damage, can cause changes in its dynamic behavior. Structural health monitoring (SHM) and damage detection methods provide a means to assess the structural integrity and safety of a civil structure using measurements of its dynamic properties. In particular, these techniques enable a quick damage assessment following a seismic event. In this thesis, the application of high-frequency seismograms to damage detection in civil structures is investigated.
Two novel methods for SHM are developed and validated using small-scale experimental testing, existing structures in situ, and numerical testing. The first method is developed for pre-Northridge steel moment-resisting frame buildings that are susceptible to weld fracture at beam-column connections. The method is based on using the response of a structure to a nondestructive force (i.e., a hammer blow) to approximate the response of the structure to a damage event (i.e., weld fracture). The method is applied to a small-scale experimental frame, where the impulse response functions of the frame are generated during an impact hammer test. The method is also applied to a numerical model of a steel frame, in which weld fracture is modeled as the tensile opening of a Mode I crack. Impulse response functions are experimentally obtained for a steel moment-resisting frame building in situ. Results indicate that while acceleration and velocity records generated by a damage event are best approximated by the acceleration and velocity records generated by a colocated hammer blow, the method may not be robust to noise. The method seems better suited for damage localization, where information such as arrival times and peak accelerations can also provide an indication of the damage location. This is of significance for sparsely instrumented civil structures.
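A minimal sketch of the signal processing underlying an impact hammer test, assuming frequency-domain deconvolution with water-level regularization (the structure and records below are synthetic placeholders, not the thesis's experimental data):

```python
import numpy as np

# Hedged sketch: estimate an impulse response function (IRF) from a hammer
# test by frequency-domain deconvolution with water-level regularization.
# The structure, force, and acceleration records are synthetic placeholders.
fs = 1000.0                                   # sampling rate (Hz)
t = np.arange(0, 2.0, 1 / fs)
true_irf = np.exp(-3 * t) * np.sin(2 * np.pi * 12 * t)  # toy decaying mode
force = np.zeros_like(t)
force[10] = 1.0                               # near-impulsive hammer blow
accel = np.convolve(force, true_irf)[: t.size]
accel += 0.01 * np.random.default_rng(2).normal(size=t.size)  # sensor noise

F, A = np.fft.rfft(force), np.fft.rfft(accel)
water = 0.01 * np.max(np.abs(F))              # floor to avoid dividing by ~0
denom = np.maximum(np.abs(F) ** 2, water ** 2)
irf_est = np.fft.irfft(A * np.conj(F) / denom, n=t.size)
print(f"correlation with true IRF: {np.corrcoef(irf_est, true_irf)[0, 1]:.3f}")
```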
The second SHM method is designed to extract features from high-frequency acceleration records that may indicate the presence of damage. As short-duration high-frequency signals (i.e., pulses) can be indicative of damage, this method relies on the identification and classification of pulses in the acceleration records. It is recommended that, in practice, the method be combined with a vibration-based method that can be used to estimate the loss of stiffness. Briefly, pulses observed in the acceleration time series when the structure is known to be in an undamaged state are compared with pulses observed when the structure is in a potentially damaged state. By comparing the pulse signatures from these two situations, changes in the high-frequency dynamic behavior of the structure can be identified, and damage signals can be extracted and subjected to further analysis. The method is successfully applied to a small-scale experimental shear beam that is dynamically excited at its base using a shake table and damaged by loosening a screw to create a moving part. Although the damage is aperiodic and nonlinear in nature, the damage signals are accurately identified, and the location of damage is determined using the amplitudes and arrival times of the damage signal. The method is also successfully applied to detect the occurrence of damage in a test bed data set provided by the Los Alamos National Laboratory, in which nonlinear damage is introduced into a small-scale steel frame by installing a bumper mechanism that limits the amount of motion between two floors. The method is robust despite a low sampling rate, though false negatives (undetected damage signals) begin to occur at high levels of damage, when the frequency of damage events increases. The method is also applied to acceleration data recorded on a damaged cable-stayed bridge in China, provided by the Center of Structural Monitoring and Control at the Harbin Institute of Technology. Acceleration records recorded after the date of damage show a clear increase in high-frequency short-duration pulses compared to those recorded earlier. One pulse from the undamaged state and two damage pulses are identified in the data. The occurrence of the detected damage pulses is consistent with a progression of damage and matches the known chronology of damage.
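One generic way to flag the short-duration high-frequency pulses described above (not necessarily the thesis's classification scheme) is a high-pass filter followed by a short-term/long-term average trigger; the signal and thresholds here are illustrative:

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt

# Hedged sketch of pulse flagging in an acceleration record: high-pass
# filter, then a short-term/long-term average (STA/LTA) ratio trigger.
fs = 200.0
t = np.arange(0, 30.0, 1 / fs)
rng = np.random.default_rng(3)
accel = 0.05 * rng.normal(size=t.size)        # ambient noise
accel[3000:3010] += np.hanning(10) * 1.0      # injected short pulse

sos = butter(4, 20.0, btype="highpass", fs=fs, output="sos")
hp = sosfiltfilt(sos, accel)

def sta_lta(x, n_sta, n_lta):
    e = x ** 2
    sta = np.convolve(e, np.ones(n_sta) / n_sta, mode="same")
    lta = np.convolve(e, np.ones(n_lta) / n_lta, mode="same") + 1e-12
    return sta / lta

ratio = sta_lta(hp, n_sta=10, n_lta=400)
picks = np.flatnonzero(ratio > 8.0)           # illustrative threshold
print(f"pulse flagged near t = {t[picks[0]]:.2f} s" if picks.size else "no pulse")
```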
Abstract:
Smartphones and other powerful sensor-equipped consumer devices make it possible to sense the physical world at an unprecedented scale. Nearly 2 million Android and iOS devices are activated every day, each carrying numerous sensors and a high-speed internet connection. Whereas traditional sensor networks have typically deployed a fixed number of devices to sense a particular phenomenon, community networks can grow as additional participants choose to install apps and join the network. In principle, this allows networks of thousands or millions of sensors to be created quickly and at low cost. However, making reliable inferences about the world using so many community sensors involves several challenges, including scalability, data quality, mobility, and user privacy.
This thesis focuses on how learning at both the sensor and network level can provide scalable techniques for data collection and event detection. First, this thesis considers the abstract problem of distributed algorithms for data collection and proposes a distributed, online approach to selecting which set of sensors should be queried. In addition to providing theoretical guarantees for submodular objective functions, the approach is also compatible with local rules or heuristics for detecting and transmitting potentially valuable observations. Next, the thesis presents a decentralized algorithm for spatial event detection and describes its use in detecting strong earthquakes within the Caltech Community Seismic Network. Despite the fact that strong earthquakes are rare and complex events, and that community sensors can be very noisy, our decentralized anomaly detection approach obtains theoretical guarantees for event detection performance while simultaneously limiting the rate of false alarms.
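For submodular objectives such as coverage, the workhorse that distributed and online variants build on is the greedy algorithm, which achieves a (1 - 1/e) approximation for monotone submodular functions. A toy sketch with a hypothetical coverage function:

```python
# Hedged sketch of greedy selection under a submodular objective, the
# standard (1 - 1/e)-approximation that distributed variants build on.
# The coverage objective and sensor sets below are toy placeholders.

def greedy_select(sensors, coverage, k):
    """Pick k sensors maximizing a monotone submodular coverage function.

    sensors  -- iterable of sensor ids
    coverage -- dict mapping sensor id to a set of covered locations
    """
    sensors = list(sensors)
    chosen, covered = [], set()
    for _ in range(k):
        best = max(sensors, key=lambda s: len(coverage[s] - covered))
        if not coverage[best] - covered:
            break  # no marginal gain left
        chosen.append(best)
        covered |= coverage[best]
    return chosen, covered

coverage = {"a": {1, 2, 3}, "b": {3, 4}, "c": {4, 5, 6}, "d": {1, 6}}
chosen, covered = greedy_select(coverage.keys(), coverage, k=2)
print(chosen, covered)  # ['a', 'c'] {1, 2, 3, 4, 5, 6}
```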
Abstract:
Earthquake early warning (EEW) systems have developed rapidly over the past decade. The Japan Meteorological Agency (JMA) operates an EEW system that was running during the 2011 M9 Tohoku earthquake in Japan, and this increased awareness of EEW systems around the world. While longer-term earthquake prediction still faces many challenges before it becomes practical, the availability of shorter-term EEW opens a new door for earthquake loss mitigation. After an earthquake fault begins rupturing, an EEW system uses the first few seconds of recorded seismic waveform data to quickly predict the hypocenter location, magnitude, origin time, and the expected shaking intensity level around the region. This early warning information is broadcast to different sites before the strong shaking arrives. The warning lead time of such a system is short, typically a few seconds to a minute or so, and the information is uncertain. These factors limit the scope for human intervention to activate mitigation actions, and this must be addressed for engineering applications of EEW. This study applies a Bayesian probabilistic approach, along with machine learning techniques and decision theory from economics, to improve different aspects of EEW operation, including extending it to engineering applications.
Existing EEW systems are often based on a deterministic approach and typically assume that only a single event occurs within a short period of time, an assumption that led to many false alarms after the Tohoku earthquake in Japan. This study develops a probability-based EEW algorithm, built on an existing deterministic model, that extends the EEW system to the case of concurrent events, which are often observed during the aftershock sequence that follows a large earthquake.
To overcome the challenges of uncertain information and the short lead time of EEW, this study also develops an earthquake probability-based automated decision-making (ePAD) framework to make robust decisions for EEW mitigation applications. A cost-benefit model that can capture the uncertainties in EEW information and in the decision process is used. This approach, called Performance-Based Earthquake Early Warning, is based on the PEER Performance-Based Earthquake Engineering method. The use of surrogate models is suggested to improve computational efficiency. New models are also proposed to incorporate the influence of lead time into the cost-benefit analysis. For example, a value-of-information model is used to quantify the potential value of delaying the activation of a mitigation action in exchange for a possible reduction in the uncertainty of the EEW information at the next update. Two practical examples, evacuation alerts and elevator control, are studied to illustrate the ePAD framework. Potential advanced EEW applications, such as multiple-action decisions and the synergy of EEW and structural health monitoring systems, are also discussed.
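At its core, such a cost-benefit framework reduces to comparing expected losses with and without the mitigation action; the sketch below uses hypothetical probabilities and costs and omits the lead-time and value-of-information refinements:

```python
# Hedged sketch of the expected-loss decision rule underlying ePAD-style
# frameworks: trigger a mitigation action when doing so lowers expected
# loss. Probabilities and costs are hypothetical placeholders.

def expected_losses(p_strong, loss_unmitigated, loss_mitigated, action_cost):
    """Expected loss without and with the mitigation action."""
    no_action = p_strong * loss_unmitigated
    action = p_strong * loss_mitigated + action_cost
    return no_action, action

p = 0.3  # EEW posterior probability of strong shaking at the site
no_act, act = expected_losses(p, loss_unmitigated=100.0,
                              loss_mitigated=20.0, action_cost=5.0)
decision = "act" if act < no_act else "wait"
print(f"E[loss | no action] = {no_act}, E[loss | action] = {act} -> {decision}")
# 0.3*100 = 30 vs 0.3*20 + 5 = 11 -> act; at p = 0.05 the same rule waits.
```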
Abstract:
This dissertation focuses on three short, specific texts from Either/Or (Ou... Ou) by the Dane Søren Aabye Kierkegaard (1813-1855). The first two texts are The Immediate Erotic Stages and The Seducer's Diary, which belong to the first part of the book; the third, entitled The Balance Between the Aesthetic and the Ethical in the Formation of Personality, belongs to its second part. Starting from a detailed explication of the content of these texts, the dissertation considers the question of the Kierkegaardian stages (aesthetic, ethical, and religious) and how they relate to existence and consciousness. Within concrete existence, the question of consciousness arises for the Danish philosopher from the explanation of these three existential dimensions, which are constituted in attunement with affective dispositions and with material modes of living and acting, described in detail through the everyday existence of characters. Initially devoid of any determination, consciousness gradually takes concrete form out of its sensible existence, which constantly harbors different moments or possibilities of its own. The fundamental thesis discussed in this context is that these existential moments cannot be regarded as evolutionary, but must be taken as possibilities or forms of life, with their positivity and their risks. The work aims to show how current readings of Kierkegaard's philosophy tend to exalt the ethical and moral aspect of the stages, ending up ignoring the most originary dimension of being, namely the dimension of immediate disposition which, when disregarded, opens a rift between man and himself.
Abstract:
Malnutrition is very frequent and is associated with morbidity and mortality in patients with chronic liver disease. Nutritional assessment in these patients is difficult because of fluid overload and altered protein synthesis, factors that distort the parameters traditionally used in nutritional evaluation. The objectives are: (a) to assess nutritional status, using Subjective Global Assessment (SGA), anthropometry, the Mendenhall score, and a combination of all these instruments, in patients with chronic liver disease; (b) to correlate nutritional status with the severity of chronic liver disease; and (c) to determine the contribution of handgrip dynamometry to nutritional assessment. A total of 305 patients with chronic liver disease, aged 18-80 years, attending the hepatobiliary disease outpatient clinic of Hospital Universitário Pedro Ernesto were included. Liver disease severity was assessed by the Child-Pugh classification and the MELD score. Anthropometric parameters (weight, height, body mass index, triceps skinfold, arm circumference, arm muscle circumference), biochemical parameters (albumin and total lymphocyte count), Subjective Global Assessment, the Mendenhall score, and handgrip strength by dynamometry were measured. Percentage-of-adequacy values of the parameters were used to classify malnutrition; all patients with adequacy below 90% were considered malnourished. A malnutrition-risk score was created, defined by an abnormality in any one of the nutritional assessment parameters. About 53% of the patients were male, 43% had liver cirrhosis, 80% had viral etiology, and the mean age was 54 ± 12 years. There was a statistically significant relationship between the functional classification of liver disease and SGA, the Mendenhall score, and the malnutrition-risk score. Anthropometry alone did not correlate with the functional classification. According to SGA, the prevalence of malnutrition was 10% in non-cirrhotic liver disease, 16% in compensated cirrhosis, and 94% in decompensated cirrhosis. According to the Mendenhall score, the figures were 31%, 38%, and 56%, respectively; according to the new score, 52%, 60%, and 96%, respectively. Although muscle strength decreased significantly as nutritional status worsened, it was not possible to establish a cutoff point for the dynamometry values. Analysis of the performance of the percentage of adequacy of muscle strength as a diagnostic criterion for patients at risk of malnutrition revealed an estimated 56% false positives and 24% false negatives. The wide variation in the prevalence of malnutrition in patients with liver disease depends on the nutritional assessment instrument used and on the functional classification of the liver disease. Not surprisingly, the combined scores detected the highest prevalence rates of malnutrition. There was a significant association between nutritional status and the severity of liver disease. The increase in malnutrition prevalence rates brought by dynamometry came at the cost of false-positive results.
Abstract:
While synoptic surveys in the optical and at high energies have revealed a rich discovery phase space of slow transients, a similar yield is still awaited in the radio. The majority of past blind surveys, carried out with radio interferometers, have suffered from a low yield of slow transients, ambiguous transient classifications, and contamination by false positives. The newly refurbished Karl G. Jansky Very Large Array (Jansky VLA) offers wider bandwidths for accurate RFI excision as well as substantially improved sensitivity and survey speed compared with the old VLA. The Jansky VLA thus eliminates the pitfalls of interferometric transient searches by facilitating sensitive, wide-field, and near-real-time radio surveys and enabling a systematic exploration of the dynamic radio sky. This thesis carries out blind Jansky VLA surveys to characterize radio variable and transient sources at frequencies of a few GHz and on timescales between days and years. Through joint radio and optical surveys, the thesis addresses outstanding questions pertaining to the rates of slow radio transients (e.g., radio supernovae, tidal disruption events, binary neutron star mergers, stellar flares), the false-positive foreground relevant to radio and optical counterpart searches for gravitational wave sources, and the beaming factor of gamma-ray bursts. The need for rapid processing of Jansky VLA data and near-real-time radio transient searching motivated the development of state-of-the-art software infrastructure. This thesis successfully demonstrates the Jansky VLA as a powerful transient search instrument and serves as a pathfinder for the transient surveys planned for the SKA-mid pathfinder facilities, viz. ASKAP, MeerKAT, and WSRT/Apertif.
Abstract:
The evaluation and comparison of internal cluster validity indices is a critical problem in the clustering field. The methodology used in most evaluations assumes that the clustering algorithms work correctly. We propose an alternative methodology that does not make this often false assumption. We compared 7 internal cluster validity indices under both methodologies and conclude that the results obtained with the proposed methodology are more representative of the actual capabilities of the compared indices.
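To make the distinction concrete, here is a sketch of the evaluation idea on synthetic data: score an internal index against ground-truth agreement across candidate partitions that include mis-specified (failed) clusterings, rather than assuming the algorithm recovered the right structure. The data, index choice, and k values are illustrative, not the paper's experimental setup.

```python
from sklearn.cluster import KMeans
from sklearn.datasets import make_blobs
from sklearn.metrics import adjusted_rand_score, silhouette_score

# Hedged sketch: evaluate an internal index (here, silhouette) against
# ground truth (ARI) over partitions that include failed clusterings.
X, y_true = make_blobs(n_samples=300, centers=4, random_state=5)

for k in (2, 3, 4, 6):
    labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(X)
    print(f"k={k}: silhouette={silhouette_score(X, labels):.2f}, "
          f"ARI={adjusted_rand_score(y_true, labels):.2f}")
# An index is informative to the extent its ranking tracks the ARI ranking,
# including over the mis-specified k values that a correct-clustering-only
# evaluation would exclude.
```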
Abstract:
This work proposes to analyze the construction of Alexandre Herculano's work as a historian, which began with the publication of numerous articles in the newspaper O Panorama and in the Revista Universal Lisbonense. In these texts, we seek to identify the initial reflections that led to a project of greater intellectual scope, the História de Portugal, published at a moment of emerging nationalities and of the formation of national consciousnesses. In this sense, we seek to understand how Herculano conceived his history by analyzing his trajectory as historian and politician amid the gradual social transformations taking place in Portugal in his time. We thus propose to consider Alexandre Herculano the politician in constant dialogue with the conjuncture of that period, taking as reference his social action and his textual intervention in the process then under way.
Abstract:
The first chapter of this thesis deals with automating data gathering for single cell microfluidic tests. The programs developed saved significant amounts of time with no loss in accuracy. The technology from this chapter was applied to experiments in both Chapters 4 and 5.
The second chapter describes the use of statistical learning to predict whether an anti-angiogenic drug (Bevacizumab) would successfully treat a glioblastoma multiforme tumor. This was done by first measuring protein levels in 92 blood samples using the DNA-encoded antibody library platform, which allowed the measurement of 35 different proteins per sample with sensitivity comparable to ELISA. Two statistical learning models were developed to predict whether the treatment would succeed. The first, logistic regression, predicted with 85% accuracy and an AUC of 0.901 using a five-protein panel. These five proteins were statistically significant predictors and gave insight into the mechanism behind anti-angiogenic success or failure. The second model, an ensemble of logistic regression, kNN, and random forest, predicted with a slightly higher accuracy of 87%.
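A sketch of the two modeling steps on randomly generated stand-in data (the features, labels, and hyperparameters are placeholders, not the study's 92-sample protein measurements):

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier, VotingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Hedged sketch: logistic regression on a small protein panel, then a
# soft-voting ensemble of logistic regression, kNN, and random forest.
# Feature values and labels are synthetic placeholders.
rng = np.random.default_rng(4)
X = rng.normal(size=(92, 5))                 # 92 samples, 5-protein panel
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.8, size=92) > 0).astype(int)

logit = make_pipeline(StandardScaler(), LogisticRegression())
ensemble = VotingClassifier(
    estimators=[("lr", make_pipeline(StandardScaler(), LogisticRegression())),
                ("knn", make_pipeline(StandardScaler(), KNeighborsClassifier())),
                ("rf", RandomForestClassifier(random_state=0))],
    voting="soft",
)
for name, model in [("logistic", logit), ("ensemble", ensemble)]:
    acc = cross_val_score(model, X, y, cv=5).mean()
    print(f"{name}: cross-validated accuracy = {acc:.2f}")
```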
The third chapter details the development of a photocleavable conjugate that enabled multiplexed cell-surface detection in microfluidic devices. The method successfully detected streptavidin on coated beads with a 92% positive predictive rate. Furthermore, chambers with 0, 1, 2, and 3+ beads were statistically distinguishable. The method was then used to detect CD3 on Jurkat T cells, yielding a positive predictive rate of 49% and a false positive rate of 0%.
The fourth chapter describes the use of T cell polyfunctionality measurements to predict whether a patient will respond to adoptive T cell transfer therapy. In 15 patients, we measured 10 proteins from individual T cells (~300 cells per patient). The polyfunctional strength index was calculated and then correlated with the patient's progression-free survival (PFS) time. Fifty-two other parameters measured in the single-cell test were also correlated with PFS. No statistically significant correlate has been identified, however, and more data are necessary to reach a conclusion.
Finally, the fifth chapter examines the interactions between T cells and how they affect protein secretion. We observed that T cells in direct contact selectively enhance their protein secretion, in some cases by over 5-fold. This occurred for Granzyme B, Perforin, CCL4, TNF-α, and IFN-γ; IL-10 was shown to decrease slightly upon contact. This phenomenon held true for T cells from all patients tested (n=8). Using single-cell data, the theoretical protein secretion frequency was calculated for two cells and then compared to the observed rate of secretion for two cells not in contact and for two cells in contact. In over 90% of cases, the theoretical protein secretion rate matched that of two cells not in contact.
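The null model implied by this comparison is straightforward: if two cells secrete independently with single-cell frequency p1, a two-cell chamber should show secretion with probability 1 - (1 - p1)^2. A sketch with illustrative numbers:

```python
# Hedged sketch of the independence null model implied above. The
# frequencies below are illustrative, not the measured values.

def expected_two_cell_rate(p_single):
    """P(at least one of two independent cells secretes) = 1 - (1 - p)^2."""
    return 1.0 - (1.0 - p_single) ** 2

p1 = 0.30  # hypothetical single-cell secretion frequency
print(f"independent prediction: {expected_two_cell_rate(p1):.2f}")  # 0.51
# If contacting pairs secrete at, say, 0.70, the excess over 0.51
# quantifies contact-mediated enhancement, while non-contact pairs
# matching 0.51 reproduces the >90% agreement noted above.
```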