956 results for False confession
Abstract:
[EN] This work focuses on some aspects of descriptive names. The New Theory of Reference, in line with Kripke, takes descriptive names to be proper names. I argue in this paper that descriptive names, and a certain theory of them, can shed light on our understanding of (some) non-existence statements, even when that theory disagrees with the New Theory of Reference. I define the concept of a descriptive name for a hypothesised object (DNHO). My thesis is that DNHOs are, as I will specify, descriptions: a proposition expressed by the utterance ‘n is F’, where ‘n’ is a DNHO, is not singular at all; it is a descriptive proposition. To sum up, concerning proper names the truth lies closer to the New Theory of Reference, but descriptivism is not altogether false. As for DNHOs, descriptivism is, in some cases, the right fit.
Abstract:
[ES] This article aims to provide new data on livestock farming in the Basque territories during the Early Modern period. Above all, it seeks to break with false paradigms that have long been repeated, by contributing previously unpublished data. The classics of Basque historiography have always stressed the rural and agrarian character of the Basque economy; even so, activities such as livestock farming have never occupied a central place as an object of study among historians, who in many cases have accepted the theories of ethnographers and anthropologists without testing them. Livestock farming in the Basque lands followed Cantabrian models, which historians of Galicia, Asturias and Cantabria have been studying for some decades now; these schools have established new methodologies for the study of livestock farming, the predominant herds, the system of exploitation, its economic impact, and so on, and their example has unfortunately not been followed in the Basque case.
Abstract:
The Accelerating Moment Release (AMR) preceding earthquakes with magnitude above 5 in Australia that occurred during the last 20 years was analyzed to test the Critical Point Hypothesis. Twelve earthquakes in the catalog were chosen based on a criterion for the number of nearby events. Results show that seven sequences with numerous events recorded leading up to the main earthquake exhibited accelerating moment release. Two occurred near in time and space to other earthquakes preceded by AMR. The remaining three sequences had very few events in the catalog, so the lack of AMR detected in the analysis may be related to catalog incompleteness. Spatio-temporal scanning of AMR parameters shows that 80% of the areas in which AMR occurred experienced large events. In areas of similar background seismicity with no large events, 10 out of 12 cases exhibit no AMR, and two others are false alarms where AMR was observed but no large event followed. The relationship between AMR and Load-Unload Response Ratio (LURR) was studied. Both methods predict similar critical region sizes; however, the critical point time using AMR is slightly earlier than the time of the critical point LURR anomaly.
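The abstract does not state the functional form fitted, but AMR studies conventionally fit the cumulative Benioff strain with a time-to-failure power law and compare its misfit with that of a straight line. The sketch below illustrates that conventional quantification; the energy relation, initial guesses, and curvature-ratio criterion are illustrative assumptions, not values taken from this analysis.

```python
import numpy as np
from scipy.optimize import curve_fit

def cumulative_benioff_strain(magnitudes):
    """Cumulative square root of seismic energy, the quantity usually fit in AMR studies."""
    energy = 10.0 ** (1.5 * np.asarray(magnitudes) + 4.8)  # Gutenberg-Richter energy (J)
    return np.cumsum(np.sqrt(energy))

def power_law(t, A, B, tc, m):
    """Time-to-failure power law: strain(t) = A + B * (tc - t)**m, with B < 0 and 0 < m < 1."""
    return A + B * np.clip(tc - t, 1e-6, None) ** m

def amr_c_value(times, magnitudes):
    """Ratio of power-law to linear RMS misfit; values well below 1 indicate acceleration."""
    times = np.asarray(times, dtype=float)
    strain = cumulative_benioff_strain(magnitudes)
    # power-law fit (initial guess: failure time just after the last event)
    p0 = [strain[-1], -1.0, times[-1] + 0.1, 0.3]
    popt, _ = curve_fit(power_law, times, strain, p0=p0, maxfev=20000)
    rms_power = np.sqrt(np.mean((strain - power_law(times, *popt)) ** 2))
    # straight-line fit for comparison
    slope, intercept = np.polyfit(times, strain, 1)
    rms_linear = np.sqrt(np.mean((strain - (slope * times + intercept)) ** 2))
    return rms_power / rms_linear
```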
Abstract:
Dr. Charles M. Breder participated in the 1934 expedition of the Atlantis from Woods Hole, Massachusetts to Panama and back and kept a field diary of daily activities. The Atlantis expedition of 1934, led by Prof. A. E. Parr, was a milestone in the history of scientific discovery in the Sargasso Sea and the West Indies. Although naturalists had visited the Sargasso Sea for many years, the Atlantis voyage was the first attempt to investigate in a detailed quantitative manner biological problems about this varying, intermittent ‘false’ bottom of living, floating plants and associated fauna. In addition to Dr. Breder, the party consisted of Dr. Alexander Forbes, Harvard University and Trustee of the Woods Hole Oceanographic Institution (WHOI); T. S. Greenwood, WHOI hydrographer; M. D. Burkenroad, Yale University’s Bingham Laboratory, carcinology and Sargasso epizoa; M. Bishop, Peabody Museum of Natural History, Zoology Dept., collections and preparations; and H. Sears, WHOI ichthyologist. The itinerary included the following waypoints: Woods Hole, the Bermudas, Turks Islands, Kingston, Colon, along the Mosquito Bank off Nicaragua, off the north coast of Jamaica, along the south coast of Cuba, Bartlett Deep, to off the Isle of Pines, through the Yucatan Channel, off Havana, off Key West, to Miami, to New York City, and then the return to Woods Hole. During the expedition, Breder collected rare and little-known flying fish species and developed a method for hatching and growing flying fish larvae. (PDF contains 48 pages)
Abstract:
In April 2005, a SHOALS 1000T LIDAR system was used as an efficient alternative for safely acquiring data to describe the existing conditions of nearshore bathymetry and the intertidal zone over an approximately 40.7 km2 (11.8 nm2) portion of hazardous coastline within the Olympic Coast National Marine Sanctuary (OCNMS). Data were logged from 1,593 km (860 nm) of track lines in just over 21 hours of flight time. Several islands and offshore rocks were also surveyed, and over 24,000 geo-referenced digital still photos were captured to assist with data cleaning and QA/QC. The 1 kHz bathymetry laser obtained a maximum water depth of 22.2 meters. Floating kelp beds, breaking surf lines and turbid water were all challenges to the survey. Although sea state was favorable for this time of the year, recent heavy rainfall and a persistent low-lying layer of fog reduced acquisition productivity. The existence of a completed VDatum model covering this same geographic region permitted the LIDAR data to be vertically transformed and merged with existing shallow water multibeam data and referenced to the mean lower low water (MLLW) tidal datum. Analysis of a multibeam bathymetry-LIDAR difference surface containing over 44,000 samples indicated surface deviations from –24.3 to 8.48 meters, with a mean difference of –0.967 meters and a standard deviation of 1.762 meters. Errors in data cleaning and false detections due to interference from surf, kelp, and turbidity likely account for the larger surface separations, while the remaining general surface difference trend could partially be attributed to a denser data set, and to the shoal-biased cleaning, binning and gridding associated with the multibeam data for maintaining conservative least depths important for charting dangers to navigation. (PDF contains 27 pages.)
Abstract:
This dissertation is an assessment of the status of odontocetes in Hawaiian waters, focusing on Oʻahu. The work builds on available literature, and on data collected by the author and by others in Hawaiian waters. Abundance and distribution patterns of odontocetes were derived from stranding and aerial survey data. A stranding network operated by the National Marine Fisheries Service, Pacific Area Office collected 187 stranding reports throughout the main Hawaiian Islands between 1937 and 2002. These reports included 16 odontocete species. The number of stranding reports increased over time and was highest on Oʻahu. Strandings occurred throughout the year, and the difference in number of strandings per month was not significant. Fifteen of the 16 species reported in the stranding record for the main Hawaiian Islands were also reported by aerial survey studies of the area between 1993 and 1998. Only 7 of the species reported were detected during aerial transects around Oʻahu between 1998 and 2000. Based on the stranding record, Kogia sp., melon-headed whales, striped dolphins and dwarf killer whales appear to be more common than suggested by aerial surveys. Conversely, pilot whales and bottlenose dolphins were more common, according to aerial surveys, than predicted by the stranding data. Aerial surveys of waters between 0 and 500 m around the Island of Oʻahu showed that the most abundant species by frequency of occurrence was the pilot whale (30% of sightings), followed by the spinner dolphin (16%) and bottlenose dolphin (14%). Because of small sample size, abundance estimates for odontocetes have a high level of uncertainty. The unavailability of a correction factor for g(0)<1, and the reduced visibility below the aircraft, further reduced accuracy and increased the inherent underestimation in the data. The most abundant species according to distance sampling estimates were spotted dolphins, pilot whales, false killer whales and spinner dolphins. A natural factor shaping the ecology of odontocete populations is predation pressure, both by other odontocetes and, more frequently, by sharks. An account of predation by a tiger shark on a spotted dolphin near Penguin Banks is used as an example of the potential mechanisms of predation by sharks on odontocetes.
Abstract:
[EN] This paper examines how the female characters in Greek novels have recourse to false speech. Based on an analysis of female speech in Attic tragedy, one of the literary genres that exerted the greatest influence on the speech parts of the novels, the study identifies which characters in the novel employ false speech and their purpose in doing so. Two types of false speech are identified: a defensive one, used by the female protagonists or by secondary characters of similar social and ideological status, and an offensive one, used by characters of lower rank and blameworthy morality within the ideological framework of love publicized through the novel.
Abstract:
The foundation of the argument of Habermas, a leading critical theorist, lies in the unequal distribution of wealth across society. He states that in an advanced capitalist society, the possibility of a crisis has shifted from the economic and political spheres to the legitimation system. Legitimation crises increase the more government intervenes in the economy (market) and with the "simultaneous political enfranchisement of almost the entire adult population" (Holub, 1991, p. 88). The reason for this increase is that policymakers in advanced capitalist democracies are caught between conflicting imperatives: they are expected to serve the interests of their nation as a whole, but they must prop up an economic system that benefits the wealthy at the expense of most workers and the environment. Habermas argues that the driving force in history is an expectation, built into the nature of language, that norms, laws, and institutions will serve the interests of the entire population and not just those of a special group. In his view, policymakers in capitalist societies are having to fend off this expectation by simultaneously correcting some of the inequities of the market, denying that they have control over people's economic circumstances, and defending the market as an equitable allocator of income (deHaven-Smith, 1988, p. 14). Critical theory suggests that this contradiction will be reflected in Everglades policy by communicative narratives that suppress and conceal tensions between environmental and economic priorities. Habermas's Legitimation Crisis states that political actors use various symbols, ideologies, narratives, and language to engage the public and avoid a legitimation crisis. These influences not only manipulate the general population into desiring what has been manufactured for them, but also leave them feeling unfulfilled and alienated. In what is known as false reconciliation, the public's view of society as rational and "conducive to human freedom and happiness" is altered so that society comes to be seen as deeply irrational and as an obstacle to the desired freedom and happiness (Finlayson, 2005, p. 5). These obstacles and irrationalities give rise to potential crises in the society. The government's increasing involvement in the Everglades under advanced capitalism leads to Habermas's four crises: economic/environmental, rationality, legitimation, and motivation. These crises occur simultaneously, work in conjunction with each other, and arise when a principle of organization is challenged by increased production needs (deHaven-Smith, 1988). Habermas states that governments use narratives in an attempt to rationalize, legitimize, obscure, and conceal their actions under advanced capitalism. Although many narratives have been told throughout the history of the Everglades (such as that the Everglades was a wilderness valued as a wasteland in its natural state), the most recent narrative, “Everglades Restoration”, is the focus of this paper. (PDF contains 4 pages)
Abstract:
[EUS] Identifying the highly gifted children present in society is not easy. For that reason, identifying the characteristics of these pupils has been the core of this work. Based on different authors and theories, two didactic contributions have been carried out. A documentary has been produced that presents these pupils and their characteristics, in which the myths and misconceptions held about this group are debunked. The second contribution of the work has been a list of observation tools to help families and teachers when identifying children, which will assist in recognising these children in any setting.
Abstract:
Uncovering the demographics of extrasolar planets is crucial to understanding the processes of their formation and evolution. In this thesis, we present four studies that contribute to this end, three of which relate to NASA's Kepler mission, which has revolutionized the field of exoplanets in the last few years.
In the pre-Kepler study, we investigate a sample of exoplanet spin-orbit measurements---measurements of the inclination of a planet's orbit relative to the spin axis of its host star---to determine whether a dominant planet migration channel can be identified, and at what confidence. Applying methods of Bayesian model comparison to distinguish between the predictions of several different migration models, we find that the data strongly favor a two-mode migration scenario combining planet-planet scattering and disk migration over a single-mode Kozai migration scenario. While we test only the predictions of particular Kozai and scattering migration models in this work, these methods may be used to test the predictions of any other spin-orbit misaligning mechanism.
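As a rough illustration of the Bayesian model comparison described above, the sketch below computes a Bayes factor between a single-mode and a two-mode migration model from a toy sample of measured spin-orbit angles. The predictive obliquity distributions, the mixture form, and the sample values are hypothetical stand-ins for the detailed model predictions used in the thesis.

```python
import numpy as np
from scipy import stats

# Hypothetical predictive obliquity (spin-orbit angle) distributions, in degrees.
def kozai_pdf(psi):
    # illustrative bimodal prediction for a single-mode Kozai scenario
    return 0.5 * stats.norm.pdf(psi, 10, 15) + 0.5 * stats.norm.pdf(psi, 120, 25)

def scatter_plus_disk_pdf(psi, f_disk):
    # mixture of well-aligned (disk migration) and broadly misaligned (scattering) systems
    return f_disk * stats.norm.pdf(psi, 5, 5) + (1 - f_disk) * stats.uniform.pdf(psi, 0, 180)

def log_marginal_single(psi_obs, pdf):
    return np.sum(np.log(pdf(psi_obs)))

def log_marginal_two_mode(psi_obs, n_grid=200):
    # marginalize the mixture fraction f_disk over a flat prior on [0, 1]
    f = np.linspace(0.0, 1.0, n_grid)
    loglikes = np.array([np.sum(np.log(scatter_plus_disk_pdf(psi_obs, fi))) for fi in f])
    return np.log(np.trapz(np.exp(loglikes - loglikes.max()), f)) + loglikes.max()

psi_obs = np.array([3.0, 8.0, 12.0, 86.0, 2.0, 40.0, 5.0])  # toy sample of obliquities
log_bayes_factor = log_marginal_two_mode(psi_obs) - log_marginal_single(psi_obs, kozai_pdf)
print(f"ln(Bayes factor), two-mode vs Kozai: {log_bayes_factor:.1f}")
```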
We then present two studies addressing astrophysical false positives in Kepler data. The Kepler mission has identified thousands of transiting planet candidates, and only relatively few have yet been dynamically confirmed as bona fide planets, with only a handful more even conceivably amenable to future dynamical confirmation. As a result, the ability to draw detailed conclusions about the diversity of exoplanet systems from Kepler detections relies critically on understanding the probability that any individual candidate might be a false positive. We show that a typical a priori false positive probability for a well-vetted Kepler candidate is only about 5-10%, enabling confidence in demographic studies that treat candidates as true planets. We also present a detailed procedure that can be used to securely and efficiently validate any individual transit candidate using detailed information of the signal's shape as well as follow-up observations, if available.
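The false positive probability statement above follows the usual Bayesian odds framing: each scenario contributes its prior occurrence rate times the likelihood of the observed signal shape under that scenario. A minimal sketch, with placeholder numbers rather than values from the thesis:

```python
def false_positive_probability(prior_fp, like_fp, prior_pl, like_pl):
    """
    Bayesian odds framing of transit-candidate vetting: each scenario contributes
    (prior occurrence rate) x (likelihood of the observed signal under that scenario).
    All inputs here are illustrative placeholders.
    """
    evidence_fp = prior_fp * like_fp   # all astrophysical false-positive scenarios combined
    evidence_pl = prior_pl * like_pl   # the true-planet scenario
    return evidence_fp / (evidence_fp + evidence_pl)

# Toy numbers: a signal whose shape fits a planet far better than a blended eclipsing binary.
fpp = false_positive_probability(prior_fp=0.01, like_fp=0.05,
                                 prior_pl=0.40, like_pl=0.90)
print(f"false positive probability ~ {fpp:.3f}")   # -> ~0.001 for these toy inputs
```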
Finally, we calculate an empirical, non-parametric estimate of the shape of the radius distribution of small planets with periods less than 90 days orbiting cool (less than 4000 K) dwarf stars in the Kepler catalog. This effort reveals several notable features of the distribution, in particular a maximum in the radius function around 1-1.25 Earth radii and a steep drop-off in the distribution for radii larger than 2 Earth radii. Even more importantly, the methods presented in this work can be applied to a broader subsample of Kepler targets to understand how the radius function of planets changes across different types of host stars.
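One common way to realize a non-parametric radius-distribution estimate is a completeness-weighted kernel density estimate; the sketch below assumes that approach, with illustrative radii, weights, and bandwidth rather than the catalog values used in the thesis.

```python
import numpy as np

def radius_function_kde(radii, weights, grid, bandwidth=0.15):
    """
    Non-parametric (Gaussian-kernel) estimate of the planet radius distribution.
    Each planet is up-weighted by 1/(detection efficiency x transit probability)
    so the estimate refers to the underlying population, not the detected sample.
    """
    radii = np.asarray(radii)[:, None]          # shape (n_planets, 1)
    grid = np.asarray(grid)[None, :]            # shape (1, n_grid)
    kernels = np.exp(-0.5 * ((grid - radii) / bandwidth) ** 2) / (bandwidth * np.sqrt(2 * np.pi))
    density = (np.asarray(weights)[:, None] * kernels).sum(axis=0)
    return density / np.trapz(density, grid.ravel())   # normalize to unit area

# Toy example: radii in Earth radii, completeness-corrected weights
radii = [0.9, 1.1, 1.2, 1.3, 1.8, 2.4, 3.0]
weights = [4.0, 3.5, 3.0, 2.8, 1.5, 1.0, 0.8]
grid = np.linspace(0.5, 4.0, 200)
density = radius_function_kde(radii, weights, grid)
print("radius of maximum density:", grid[np.argmax(density)])
```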
Abstract:
The dynamic properties of a structure are a function of its physical properties, and changes in the physical properties of the structure, including the introduction of structural damage, can cause changes in its dynamic behavior. Structural health monitoring (SHM) and damage detection methods provide a means to assess the structural integrity and safety of a civil structure using measurements of its dynamic properties. In particular, these techniques enable a quick damage assessment following a seismic event. In this thesis, the application of high-frequency seismograms to damage detection in civil structures is investigated.
Two novel methods for SHM are developed and validated using small-scale experimental testing, existing structures in situ, and numerical testing. The first method is developed for pre-Northridge steel-moment-resisting frame buildings that are susceptible to weld fracture at beam-column connections. The method is based on using the response of a structure to a nondestructive force (i.e., a hammer blow) to approximate the response of the structure to a damage event (i.e., weld fracture). The method is applied to a small-scale experimental frame, where the impulse response functions of the frame are generated during an impact hammer test. The method is also applied to a numerical model of a steel frame, in which weld fracture is modeled as the tensile opening of a Mode I crack. Impulse response functions are experimentally obtained for a steel moment-resisting frame building in situ. Results indicate that while acceleration and velocity records generated by a damage event are best approximated by the acceleration and velocity records generated by a colocated hammer blow, the method may not be robust to noise. The method seems to be better suited for damage localization, where information such as arrival times and peak accelerations can also provide indication of the damage location. This is of significance for sparsely-instrumented civil structures.
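A simple way to quantify how well a hammer-blow response approximates a damage-event record, as the comparison above requires, is a peak normalized cross-correlation between the two signals. The sketch below assumes that measure; the variable names and alignment convention are illustrative, not part of the method as stated.

```python
import numpy as np

def normalized_cross_correlation(damage_record, hammer_irf):
    """
    Peak normalized cross-correlation between a record generated by a damage event
    and the impulse response from a colocated hammer blow. Values near 1 suggest the
    hammer response is a good surrogate; both inputs are assumed to share a sampling rate.
    """
    a = (damage_record - damage_record.mean()) / damage_record.std()
    b = (hammer_irf - hammer_irf.mean()) / hammer_irf.std()
    xcorr = np.correlate(a, b, mode="full") / len(a)
    best = np.argmax(np.abs(xcorr))
    lag = best - (len(b) - 1)   # lag of best alignment, in samples
    return xcorr[best], lag
```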
The second SHM method is designed to extract features from high-frequency acceleration records that may indicate the presence of damage. As short-duration high-frequency signals (i.e., pulses) can be indicative of damage, this method relies on the identification and classification of pulses in the acceleration records. It is recommended that, in practice, the method be combined with a vibration-based method that can be used to estimate the loss of stiffness. Briefly, pulses observed in the acceleration time series when the structure is known to be in an undamaged state are compared with pulses observed when the structure is in a potentially damaged state. By comparing the pulse signatures from these two situations, changes in the high-frequency dynamic behavior of the structure can be identified, and damage signals can be extracted and subjected to further analysis. The method is successfully applied to a small-scale experimental shear beam that is dynamically excited at its base using a shake table and damaged by loosening a screw to create a moving part. Although the damage is aperiodic and nonlinear in nature, the damage signals are accurately identified, and the location of damage is determined using the amplitudes and arrival times of the damage signal. The method is also successfully applied to detect the occurrence of damage in a test bed data set provided by the Los Alamos National Laboratory, in which nonlinear damage is introduced into a small-scale steel frame by installing a bumper mechanism that inhibits the amount of motion between two floors. The method is successfully applied and is robust despite a low sampling rate, though false negatives (undetected damage signals) begin to occur at high levels of damage when the frequency of damage events increases. The method is also applied to acceleration data recorded on a damaged cable-stayed bridge in China, provided by the Center of Structural Monitoring and Control at the Harbin Institute of Technology. Acceleration records recorded after the date of damage show a clear increase in high-frequency short-duration pulses compared to those previously recorded. One pulse from the undamaged state and two damage pulses are identified from the data. The occurrence of the detected damage pulses is consistent with a progression of damage and matches the known chronology of damage.
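A minimal sketch of the pulse-identification step described above, assuming a high-pass filter followed by a robust amplitude threshold; the cutoff frequency, threshold multiplier, and merging gap are illustrative choices, not the values used in the thesis.

```python
import numpy as np
from scipy import signal

def detect_pulses(acc, fs, f_cut=50.0, n_sigma=5.0, min_gap=0.05):
    """
    Flag short-duration high-frequency pulses in an acceleration record.
    High-pass filter above f_cut (Hz), then mark excursions beyond n_sigma times a
    robust noise level; exceedances closer than min_gap (s) are merged into one pulse.
    Parameter values are illustrative.
    """
    sos = signal.butter(4, f_cut, btype="highpass", fs=fs, output="sos")
    hf = signal.sosfiltfilt(sos, acc)
    noise = 1.4826 * np.median(np.abs(hf - np.median(hf)))   # MAD-based noise estimate
    exceed = np.flatnonzero(np.abs(hf) > n_sigma * noise)
    if exceed.size == 0:
        return []
    breaks = np.flatnonzero(np.diff(exceed) > min_gap * fs)
    starts = np.r_[exceed[0], exceed[breaks + 1]]
    return list(starts / fs)   # pulse arrival times in seconds

# Comparing len(detect_pulses(baseline, fs)) with len(detect_pulses(test, fs)) mirrors
# the undamaged-vs-potentially-damaged comparison described above.
```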
Abstract:
Smartphones and other powerful sensor-equipped consumer devices make it possible to sense the physical world at an unprecedented scale. Nearly 2 million Android and iOS devices are activated every day, each carrying numerous sensors and a high-speed internet connection. Whereas traditional sensor networks have typically deployed a fixed number of devices to sense a particular phenomenon, community networks can grow as additional participants choose to install apps and join the network. In principle, this allows networks of thousands or millions of sensors to be created quickly and at low cost. However, making reliable inferences about the world using so many community sensors involves several challenges, including scalability, data quality, mobility, and user privacy.
This thesis focuses on how learning at both the sensor- and network-level can provide scalable techniques for data collection and event detection. First, this thesis considers the abstract problem of distributed algorithms for data collection, and proposes a distributed, online approach to selecting which set of sensors should be queried. In addition to providing theoretical guarantees for submodular objective functions, the approach is also compatible with local rules or heuristics for detecting and transmitting potentially valuable observations. Next, the thesis presents a decentralized algorithm for spatial event detection, and describes its use in detecting strong earthquakes within the Caltech Community Seismic Network. Despite the fact that strong earthquakes are rare and complex events, and that community sensors can be very noisy, our decentralized anomaly detection approach obtains theoretical guarantees for event detection performance while simultaneously limiting the rate of false alarms.
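A minimal sketch of the two-level idea described above, assuming each sensor flags an anomaly against its own noise quantile and the network declares an event only when the count of simultaneous flags is improbable under independence; the thresholds and the binomial null model are illustrative simplifications, not the decentralized algorithm itself.

```python
import numpy as np
from scipy import stats

def sensor_flags(readings, noise_history, per_sensor_p=0.01):
    """Each sensor flags an anomaly when its reading exceeds its own noise quantile."""
    thresholds = np.quantile(noise_history, 1.0 - per_sensor_p, axis=1)
    return readings > thresholds

def network_event(flags, per_sensor_p=0.01, network_alpha=1e-6):
    """
    Declare an event when the number of simultaneous flags is improbable under the
    null hypothesis that sensors flag independently with probability per_sensor_p.
    network_alpha bounds the rate of network-level false alarms.
    """
    n = flags.size
    k = int(flags.sum())
    p_null = stats.binom.sf(k - 1, n, per_sensor_p)   # P(at least k flags | no event)
    return p_null < network_alpha

# noise_history: shape (n_sensors, n_past_samples); readings: shape (n_sensors,)
```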
Abstract:
Earthquake early warning (EEW) systems have been rapidly developing over the past decade. The Japan Meteorological Agency (JMA) has an EEW system that was operating during the 2011 M9 Tohoku earthquake in Japan, and this increased the awareness of EEW systems around the world. While longer-term earthquake prediction still faces many challenges before it can be practical, the availability of shorter-time EEW opens up a new door for earthquake loss mitigation. After an earthquake fault begins rupturing, an EEW system utilizes the first few seconds of recorded seismic waveform data to quickly predict the hypocenter location, magnitude, origin time and the expected shaking intensity level around the region. This early warning information is broadcast to different sites before the strong shaking arrives. The warning lead time of such a system is short, typically a few seconds to a minute or so, and the information is uncertain. These factors limit the scope for human intervention to activate mitigation actions, and they must be addressed for engineering applications of EEW. This study applies a Bayesian probabilistic approach along with machine learning techniques and decision theories from economics to improve different aspects of EEW operation, including extending it to engineering applications.
Existing EEW systems are often based on a deterministic approach. Often, they assume that only a single event occurs within a short period of time, an assumption that led to many false alarms after the Tohoku earthquake in Japan. This study develops a probability-based EEW algorithm based on an existing deterministic model to extend the EEW system to the case of concurrent events, which are often observed during the aftershock sequence after a large earthquake.
To overcome the challenge of uncertain information and short lead time of EEW, this study also develops an earthquake probability-based automated decision-making (ePAD) framework to make robust decisions for EEW mitigation applications. A cost-benefit model that can capture the uncertainties in EEW information and the decision process is used. This approach, called Performance-Based Earthquake Early Warning, is based on the PEER Performance-Based Earthquake Engineering method. Use of surrogate models is suggested to improve computational efficiency. Also, new models are proposed to add the influence of lead time into the cost-benefit analysis. For example, a value of information model is used to quantify the potential value of delaying the activation of a mitigation action for a possible reduction of the uncertainty of EEW information in the next update. Two practical examples, evacuation alert and elevator control, are studied to illustrate the ePAD framework. Potential advanced EEW applications, such as the case of multiple-action decisions and the synergy of EEW and structural health monitoring systems, are also discussed.
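A minimal sketch of the lead-time-aware cost-benefit comparison described above: acting immediately is weighed against deferring the decision to the next EEW update, where waiting may sharpen the probability estimate but makes the mitigation action less effective. All costs, the posterior over the updated probability, and the lead-time penalty are toy assumptions, not components of the ePAD framework itself.

```python
import numpy as np

def expected_cost(act, p_strong, cost_action=1.0, loss_unmitigated=20.0, loss_mitigated=2.0):
    """Expected cost of acting vs not acting, given P(strong shaking). Costs are toy values."""
    if act:
        return cost_action + p_strong * loss_mitigated
    return p_strong * loss_unmitigated

def cost_of_waiting(p_now, posterior_samples, lead_time_penalty=0.5):
    """
    Value-of-information style comparison: waiting lets the decision use the updated
    probability (sampled here from an assumed posterior over the next EEW update), but
    the mitigation action becomes less effective because lead time is lost.
    """
    deferred = np.mean([min(expected_cost(True, p) + lead_time_penalty,
                            expected_cost(False, p))
                        for p in posterior_samples])
    immediate = min(expected_cost(True, p_now), expected_cost(False, p_now))
    return deferred, immediate

posterior_samples = np.clip(np.random.normal(0.3, 0.15, 1000), 0.0, 1.0)
deferred, immediate = cost_of_waiting(p_now=0.3, posterior_samples=posterior_samples)
print("wait for next update" if deferred < immediate else "act now")
```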
Abstract:
Malnutrition, besides being very frequent, is associated with morbidity and mortality in patients with chronic liver disease. Nutritional assessment in liver disease patients is difficult because of fluid overload and altered protein synthesis, factors that distort the parameters traditionally used in nutritional assessment. The objectives are: a) to assess nutritional status, using Subjective Global Assessment (SGA), anthropometry, the Mendenhall score, and the combination of all these instruments, in patients with chronic liver disease; b) to correlate nutritional status with the severity of chronic liver disease; c) to determine the contribution of handgrip dynamometry to nutritional assessment. A total of 305 patients with chronic liver disease, aged 18-80 years, seen at the hepatobiliary disease outpatient clinic of Hospital Universitário Pedro Ernesto, were included. Liver disease severity was assessed by the Child-Pugh classification and the MELD score. Anthropometric parameters (weight, height, body mass index, triceps skinfold, arm circumference, mid-arm muscle circumference), biochemical parameters (albumin and total lymphocyte count), Subjective Global Assessment, the Mendenhall score, and handgrip strength measured by dynamometry were obtained. The adequacy percentages of these parameters were used to classify malnutrition: all patients with adequacy percentages below 90% were considered malnourished. A risk-of-malnutrition score was created, defined by an alteration in any one of the nutritional assessment parameters. About 53% of the patients were male, 43% had liver cirrhosis, 80% had viral etiology, and the mean age was 54 ± 12 years. There was a statistically significant relationship between the functional classification of liver disease and SGA, the Mendenhall score, and the risk-of-malnutrition score. Anthropometry alone did not correlate with the functional classification. According to SGA, the prevalence of malnutrition was 10% in non-cirrhotic liver disease, 16% in compensated cirrhosis, and 94% in decompensated cirrhosis. According to the Mendenhall score, the figures were 31%, 38%, and 56%, respectively. According to the new score, the figures were 52%, 60%, and 96%, respectively. Although muscle strength decreased significantly with worsening nutritional status, it was not possible to establish a cut-off point for the dynamometry values. Analysis of the performance of the muscle-strength adequacy percentage as a diagnostic criterion for patients at risk of malnutrition revealed an estimated 56% of false positives and 24% of false negatives. The wide variation in the prevalence of malnutrition in patients with liver disease depends on the nutritional assessment instrument used and on the functional classification of the liver disease. Not surprisingly, the combined scores detected the highest prevalence rates of malnutrition. There was a significant association between nutritional status and the severity of liver disease. The increase in the prevalence of malnutrition brought by dynamometry came at the cost of false-positive results.
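A minimal sketch of the adequacy-percentage rule and the combined risk-of-malnutrition score described above: a patient is flagged when any parameter falls below 90% of its reference value. The parameter names, measurements, and reference values are illustrative, not data from the study.

```python
def adequacy_percent(measured, reference):
    """Adequacy of an anthropometric or strength parameter relative to its reference value."""
    return 100.0 * measured / reference

def at_risk_of_malnutrition(measurements, references, cutoff=90.0):
    """
    Combined risk score as described in the abstract: the patient is classified as at risk
    when ANY parameter (e.g., triceps skinfold, arm circumference, handgrip strength)
    falls below the 90% adequacy cutoff. Parameter names here are illustrative.
    """
    return any(adequacy_percent(measurements[k], references[k]) < cutoff
               for k in measurements)

patient = {"triceps_skinfold_mm": 9.5, "arm_circumference_cm": 27.0, "handgrip_kgf": 22.0}
reference = {"triceps_skinfold_mm": 12.5, "arm_circumference_cm": 29.0, "handgrip_kgf": 30.0}
print(at_risk_of_malnutrition(patient, reference))   # True: handgrip adequacy ~73% < 90%
```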