904 results for Turn Around Time
Abstract:
Dissolved seawater neodymium isotopes, radium isotopes, and rare earth element concentrations measured in coastal waters around Oahu and at HOT-ALOHA. Data from R/V Kilo Moana cruise KM1107, supplemented by data from Kilo Moana cruises KM1215 (Hoe-Dylan V), KM1219 (Hoe-Dylan IX), KM1309 (Hoe-Phor I), and KM1316 (Hoe-Phor II).
Abstract:
The data set consists of maps of total surface current velocity in the Southeastern Bay of Biscay, averaged over a 1-hour interval centered on each cardinal hour. Surface ocean velocities estimated by this HF radar (4.65 MHz) are representative of the upper 2-3 meters of the ocean. The main objective of near-real-time processing is to produce the best product from the data available at the time of processing. Total velocities are derived using a least-squares fit that maps radial velocities measured by the individual sites onto a Cartesian grid. The final product is a map of the horizontal components of the ocean currents on a regular grid over the area of overlap of two or more radar stations.
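The total-vector step described above reduces, at each grid point, to an overdetermined least-squares problem: every radial measurement is the projection of the unknown current (u, v) onto a station's look direction. A minimal sketch in Python/NumPy (the bearing convention and example values are illustrative assumptions, not the operational processing chain):

```python
import numpy as np

def total_velocity(radial_speeds, bearings_deg):
    """Least-squares estimate of the horizontal current (u, v) at one
    grid point from radial velocities measured by two or more HF radar
    stations. Each radial is the projection of (u, v) onto its station's
    look direction (bearing clockwise from north):
        vr_i = u * sin(theta_i) + v * cos(theta_i)
    """
    theta = np.radians(bearings_deg)
    # Design matrix: one row per radial observation.
    A = np.column_stack([np.sin(theta), np.cos(theta)])
    (u, v), *_ = np.linalg.lstsq(A, np.asarray(radial_speeds), rcond=None)
    return u, v

# Example: three radials consistent with an eastward 0.3 m/s current.
u, v = total_velocity([0.3, 0.212, 0.0], [90.0, 45.0, 0.0])
print(f"u = {u:.3f} m/s, v = {v:.3f} m/s")
```

With radials from two or more well-separated look directions the fit recovers both components; with nearly parallel bearings the geometry makes the estimate unstable, which is why totals are produced only where the coverage of two or more stations overlaps.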
Abstract:
The composition and concentration of colored dissolved organic matter (CDOM) were determined in Hudson Bay and Hudson Strait by excitation-emission matrix spectroscopy (EEM) and parallel factor analysis (PARAFAC). Based on 63 surface samples, PARAFAC identified three fluorescent components, attributed to two humic-like and one protein-like component. One humic-like component was identified as representing terrestrial organic matter and showed conservative behaviour in Hudson Bay estuaries. The second humic-like component, traditionally identified as peak M, originated both from land and from production in the marine environment. Component 3 had spectra resembling protein-like material and was thought to be plankton-derived. The distribution and composition of CDOM were largely controlled by water mass mixing, with the protein-like component being the least affected. Distinctive fluorescence patterns were also found between Hudson Bay and Hudson Strait, suggesting different sources of CDOM. The optically active fraction of DOC (both absorbing and fluorescing) was very high in Hudson Bay (up to 89%), suggesting that fluorescence and absorbance can be used as proxies for DOC concentration.
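PARAFAC treats a stack of EEMs as a three-way array (sample × excitation × emission) and decomposes it into trilinear components whose loadings are read as fluorophore spectra and whose scores give per-sample intensities. A minimal sketch using the tensorly library; the rank of 3 mirrors the abstract, but the synthetic tensor, its dimensions, and the solver settings are placeholder assumptions:

```python
import numpy as np
import tensorly as tl
from tensorly.decomposition import non_negative_parafac

# Synthetic EEM stack: 63 samples x 25 excitation x 40 emission bands
# (random placeholder data; real EEMs would come from instrument files).
rng = np.random.default_rng(0)
eems = tl.tensor(rng.random((63, 25, 40)))

# Rank-3 non-negative PARAFAC, mirroring the three fluorescent components.
weights, (scores, ex_loadings, em_loadings) = non_negative_parafac(
    eems, rank=3, n_iter_max=200, tol=1e-7)

# scores[:, k]                          -> per-sample intensity of component k
# ex_loadings[:, k], em_loadings[:, k]  -> its excitation/emission spectra
```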
Abstract:
The neoliberal period was accompanied by a momentous transformation of the US health care system. As the result of a number of political and historical dynamics, the healthcare law signed by President Barack Obama in 2010, the Affordable Care Act (ACA), drew less on universal models from abroad than on earlier conservative healthcare reform proposals. This was in part the result of the influence of powerful corporate healthcare interests. While the ACA expands healthcare coverage, it does so incompletely and unevenly, with persistent uninsurance and disparities in access based on insurance status. Additionally, the law accommodates an overall shift towards a consumerist model of care characterized by high cost sharing at the time of use. Finally, the law encourages further consolidation of the healthcare sector, for instance into units named “Accountable Care Organizations” that closely resemble the health maintenance organizations favored by managed care advocates. The overall effect has been to maintain a fragmented system that is neither equitable nor efficient. A single-payer universal system would, in contrast, help transform healthcare into a social right.
Abstract:
An array of Bio-Argo floats equipped with radiometric sensors has recently been deployed in various open-ocean areas representative of the diversity of trophic and bio-optical conditions prevailing in so-called Case 1 waters. Around solar noon and almost every day, each float acquires 0-250 m vertical profiles of Photosynthetically Available Radiation and downward irradiance at three wavelengths (380, 412 and 490 nm). To date, more than 6500 profiles have been acquired for each radiometric channel. Because these radiometric data are collected without operator control and regardless of meteorological conditions, specific, automatic data-processing protocols have to be developed. Here, we present a data quality-control procedure aimed at verifying profile shapes and providing near-real-time data distribution. The procedure is specifically designed to: 1) identify the main measurement issues (i.e., dark signal, atmospheric clouds, spikes and wave-focusing occurrences); and 2) validate the final data with a hierarchy of tests to ensure their scientific usability. The procedure, adapted to each of the four radiometric channels, flags each profile in a way compliant with the data management procedure used by the Argo program. The main perturbations in the light field are identified by the new protocols with good performance over the whole dataset, highlighting their potential applicability at the global scale. Finally, comparison with modeled surface irradiances allows us to assess the accuracy of quality-controlled irradiance measurements and to identify any possible evolution over the float lifetime due to biofouling and instrumental drift.
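Two of the checks named above, dark signal and spikes, are easy to illustrate. The sketch below flags points at or below an assumed dark level and points deviating strongly from a running median; the thresholds and test forms are illustrative assumptions, not the Argo QC specification:

```python
import numpy as np

def qc_flags(irradiance, dark_level=1e-4, spike_factor=5.0):
    """Illustrative checks for one downward-irradiance profile: flag
    values at or below an assumed dark-signal level, and flag spikes as
    points that deviate strongly from a 5-point running median. All
    thresholds are placeholders, not the Argo QC specification."""
    irr = np.asarray(irradiance, dtype=float)
    dark = irr <= dark_level

    # 5-point running median as a smooth reference profile.
    padded = np.pad(irr, 2, mode="edge")
    median = np.array([np.median(padded[i:i + 5]) for i in range(irr.size)])
    residual = np.abs(irr - median)
    spikes = residual > spike_factor * (np.median(residual) + 1e-12)
    return dark, spikes

# Example: an exponentially decaying profile with one injected spike.
profile = np.exp(-np.linspace(0, 6, 100))
profile[40] *= 10.0
dark, spikes = qc_flags(profile)
print("spike indices:", np.flatnonzero(spikes))
```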
Abstract:
Supply Chain Simulation (SCS) is applied to acquire information to support outsourcing decisions, but obtaining enough detail in key parameters can often be a barrier to making well-informed decisions.
One relatively unexplored aspect of SCS is the impact of inaccurate data about delays within the SC. Here, the impact of the magnitude and variability of process cycle time on typical performance indicators in an SC context is studied (a miniature version of this experiment is sketched after the abstract).
System cycle time, WIP levels and throughput are more sensitive to the magnitude of deterministic deviations in process cycle time than to variable deviations. Manufacturing costs are not very sensitive to these deviations.
Future opportunities include investigating the impact of process failures or product defects, including logistics and transportation between SC members, and using alternative costing methodologies.
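A miniature version of the cycle-time experiment can be run with a single-stage queue model: perturb the process cycle time either deterministically (a fixed bias) or stochastically (jitter around the same mean) and compare throughput and time-averaged WIP. All parameters below are illustrative assumptions, not values from the study:

```python
import random

def simulate(cycle_time, jitter=0.0, n_jobs=10000, arrival_interval=1.2, seed=1):
    """Single-stage queue: jobs arrive at fixed intervals and one machine
    serves them with cycle time `cycle_time` plus uniform jitter in
    [-jitter, +jitter]. Returns throughput and time-averaged WIP."""
    rng = random.Random(seed)
    machine_free = 0.0   # time the machine next becomes available
    total_sojourn = 0.0  # sum of (finish - arrival) over all jobs
    finish = 0.0
    for k in range(n_jobs):
        arrival = k * arrival_interval
        start = max(arrival, machine_free)
        finish = start + cycle_time + rng.uniform(-jitter, jitter)
        machine_free = finish
        total_sojourn += finish - arrival
    throughput = n_jobs / finish
    wip = total_sojourn / finish  # time-average number in system (Little's law)
    return throughput, wip

# Deterministic +10% deviation vs. the same-size zero-mean variability:
print(simulate(cycle_time=1.1, jitter=0.0))
print(simulate(cycle_time=1.0, jitter=0.1))
```

In this toy model the deterministic bias lengthens every job's flow time, while zero-mean jitter of the same size largely averages out, which is consistent with the sensitivity ordering reported above.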
Abstract:
Grazing practices in rangelands are increasingly recognized as a management tool for environmental protection in addition to livestock production. Long-term continuous grazing has been widely documented to reduce pasture productivity and degrade the protective layer of the soil surface, compromising environmental protection. Time-controlled rotational grazing (TC grazing), as an alternative to continuous grazing, is considered to reduce these negative effects and to provide pastures with a greater amount of vegetation, securing food for animals and conserving the environment. To investigate how this grazing system affects herbage and above-ground organic material compared with continuous grazing, a study was conducted in a sub-tropical region of Australia from 2001 to 2006. The overall results showed that herbage mass under TC grazing increased to 140% in 2006 relative to the first records taken in 2001. The gains were even higher (150%) where soils were deeper and slopes gentler. In line with the herbage mass results, ground cover under TC grazing reached significantly higher percentages than under continuous grazing in all years of the study. Ground cover under TC grazing increased from 54% in 2003 to 73%, 82%, and 89% in 2004, 2005, and 2006, respectively, even though herbage mass declined to around 2.5 ton ha^(−1) in 2005 and 2006 after the high-yielding year of 2004. Under continuous grazing, however, there was no comparable increase over time in either herbage mass or ground cover. The successful outcome is largely attributed to the flexible nature of the management, in which grazing frequency, duration, and rest periods were efficiently controlled. Such flexibility of animal presence on pastures could result in higher water retention and better soil moisture conditions, promoting above-ground organic material.
Abstract:
Active Vision Systems can be considered dynamical systems that close the loop around artificial visual perception, controlling camera parameters and motion, and also controlling processing, in order to simplify and accelerate visual perception and make it more robust. Research and development in Active Vision Systems [Aloi87], [Bajc88] is a major area of interest in Computer Vision, mainly because of its potential application in scenarios where real-time performance is needed, such as robot navigation, surveillance and visual inspection, among many others. Several systems have been developed in recent years using robotic heads for this purpose...
Abstract:
Following the intrinsically linked balance sheets in his Capital Formation Life Cycle, Lukas M. Stahl explains with his Triple A Model of Accounting, Allocation and Accountability the stages of the Capital Formation process from FIAT to EXIT. Based on the theoretical foundations of legal risk laid by the International Bar Association with the help of Roger McCormick and legal scholars such as Joanna Benjamin, Matthew Whalley and Tobias Mahler, and founded on Wesley Hohfeld’s category theory of jural relations, Stahl develops his mutually exclusive Four Determinants of Legal Risk: Law, Lack of Right, Liability and Limitation. These Four Determinants of Legal Risk allow us to apply, assess, and precisely describe the respective legal risk at all stages of the Capital Formation Life Cycle, as demonstrated in case studies of nine industry verticals of the proposed and currently negotiated Transatlantic Trade and Investment Partnership between the United States of America and the European Union (TTIP), as well as in the case of the often-cited financing relationship between the United States and the People’s Republic of China. Having established the Four Determinants of Legal Risk and their application to the Capital Formation Life Cycle, Stahl then explores the theoretical foundations of capital formation, their historical basis in classical and neo-classical economics and its forefathers, such as the Austrians around Eugen von Boehm-Bawerk, Ludwig von Mises and Friedrich von Hayek and, most notably and controversially, Karl Marx, and their impact on today’s exponential expansion of capital formation. Starting with the first pillar of his Triple A Model, Accounting, Stahl explains the Three Factors of Capital Formation, Man, Machines and Money, and shows how “value added” is created with respect to the non-monetary capital factors of human resources and industrial production. In a detailed analysis of the roles of the Three Actors of Monetary Capital Formation, Central Banks, Commercial Banks and Citizens, Stahl dismisses a number of myths regarding the creation of money, providing in-depth insight into the workings of monetary policy makers, their institutions and their ultimate beneficiaries, the corporate and consumer citizens. In his second pillar, Allocation, Stahl continues his analysis of the balance sheets of the Capital Formation Life Cycle by discussing the role of the Five Key Accounts of Monetary Capital Formation, the Sovereign, Financial, Corporate, Private and International accounts, and the associated legal risks in the allocation of capital pursuant to his Four Determinants of Legal Risk. In his third pillar, Accountability, Stahl discusses the ever-recurring Crisis-Reaction-Acceleration-Sequence-History (in short: CRASH) since the beginning of the millennium: the dot-com crash at the turn of the millennium, followed seven years later by the financial crisis of 2008, and the dislocations in the global economy we face another seven years later, in 2015, with several sordid debt restructurings under way and hundreds of thousands of refugees on the way, caused by war and increasing inequality.
Together with the regulatory reactions they have caused in the form of so-called landmark legislation, such as the Sarbanes-Oxley Act of 2002, the Dodd-Frank Act of 2010, the JOBS Act of 2012, the introduction of the Basel Accords (Basel II in 2004 and Basel III in 2010), the European Financial Stability Facility of 2010, the European Stability Mechanism of 2012 and the European Banking Union of 2013, Stahl analyses the acceleration in the size and scope of crises that often appears to meet seemingly helpless bureaucratic responses, the inherent legal risks, and the complete lack of accountability on the part of those responsible. Stahl argues that the order of the day requires addressing the root cause of the problems: two fundamental design defects of our Global Economic Order, namely our monetary and judicial orders. Inspired by a 1933 plan of nine University of Chicago economists to abolish the fractional reserve system, he proposes the introduction of Sovereign Money as a prerequisite for voiding misallocations by judicial order in the course of domestic and transnational insolvency proceedings, including the restructuring of sovereign debt, throughout the entire monetary system back to its origin, without causing domino effects of banking collapses and failed financial institutions. Recognizing the Austrian-American economist Schumpeter’s concept of Creative Destruction as a process of industrial mutation that incessantly revolutionizes the economic structure from within, incessantly destroying the old one and incessantly creating a new one, Stahl responds to Schumpeter’s economic chemotherapy with his Concept of Equitable Default, mimicking an immunotherapy that strengthens the corpus economicus’ own immune system by providing the judicial authority to terminate precisely those misallocations that have proven malignant, causing default, pursuant to the centuries-old common law concept of equity that allows for the equitable reformation, rescission or restitution of contracts by judicial order. Following a review of the proposed mechanisms of transnational dispute resolution and current court systems with transnational jurisdiction, Stahl advocates, as a first step towards completing the Capital Formation Life Cycle from FIAT, the creation of money by way of credit, to EXIT, the termination of money by way of judicial order, the institution of a Transatlantic Trade and Investment Court constituted by a panel of judges from the U.S. Court of International Trade and the European Court of Justice, following the model of the EFTA Court of the European Free Trade Association. Since his proposal was first made public in June 2014, after being discussed in academic circles since 2011, this and similar proposals have found numerous public supporters. Most notably, the former Vice President of the European Parliament, David Martin, tabled an amendment in June 2015 in the course of the negotiations on TTIP calling for an independent judicial body, and the Member of the European Commission, Cecilia Malmström, presented her proposal for an International Investment Court on September 16, 2015.
Stahl concludes that, for the first time in the history of our generation, there appears to be a real opportunity for reform of our Global Economic Order by curing the two fundamental design defects of our monetary and judicial orders: the abolition of the fractional reserve system and the introduction of Sovereign Money, together with the institution of a democratically elected Transatlantic Trade and Investment Court that, with its jurisdiction extending to cases concerning the Transatlantic Trade and Investment Partnership, may complete the Capital Formation Life Cycle, resolving cases of default with the transnational judicial authority for terminal resolution of misallocations in a New Global Economic Order, without the ensuing dangers of systemic collapse, from FIAT to EXIT.
Abstract:
Thesis (Ph.D.)--University of Washington, 2016-08
Abstract:
We investigate the application of time-reversed electromagnetic wave propagation to transmit energy in a wireless power transmission system. “Time reversal” is a signal-focusing method that exploits the time-reversal invariance of the lossless wave equation to direct signals onto a single point inside a complex scattering environment. In this work, we explore the properties of time-reversed microwave pulses in a low-loss ray-chaotic chamber. We measure the spatial profile of the collapsing wavefront around the target antenna and demonstrate that time reversal can be used to transfer energy to a receiver in motion. We demonstrate how nonlinear elements can be controlled to selectively focus on one target out of a group. Finally, we discuss the design of a rectenna for use in a time-reversal system. We explore the implications of these results and how they may be applied in future technologies.
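The focusing mechanism can be sketched in one dimension: if the channel impulse response h(t) is recorded, time-reversed, and retransmitted through the same channel, the receiver sees the autocorrelation of h, which peaks sharply at the focal instant no matter how disordered the echoes are. A toy NumPy illustration with a synthetic multipath response (all values are placeholders, not measurements from the chamber):

```python
import numpy as np

# Synthetic multipath impulse response: a few hundred random echoes,
# standing in for a ray-chaotic cavity (illustrative, not measured data).
rng = np.random.default_rng(42)
h = np.zeros(512)
taps = rng.choice(512, size=200, replace=False)
h[taps] = rng.normal(size=200)

# Transmit the time-reversed response back through the same channel:
# the received signal is the autocorrelation of h, peaking at the focus.
y = np.convolve(h[::-1], h)
peak = np.max(np.abs(y))
background = np.median(np.abs(y))
print(f"focusing gain ~ {peak / background:.0f}x above the background")
```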
Abstract:
In today's fast-paced and interconnected digital world, the data generated by an increasing number of applications is being modeled as dynamic graphs. The graph structure encodes relationships among data items, while the structural changes to the graphs and the continuous stream of information produced by the entities in these graphs make them dynamic in nature. Examples include social networks where users post status updates, images, and videos; phone call networks where nodes may send text messages or place phone calls; road traffic networks where the traffic behavior of road segments changes constantly; and so on. There is tremendous value in storing, managing, and analyzing such dynamic graphs and deriving meaningful insights in real time. However, the majority of work in graph analytics assumes a static setting, and there is a lack of systematic study of the various dynamic scenarios, the complexity they impose on analysis tasks, and the challenges in building efficient systems that can support such tasks at a large scale. In this dissertation, I design a unified streaming graph data management framework and develop prototype systems to support increasingly complex tasks on dynamic graphs. In the first part, I focus on the management and querying of distributed graph data. I develop a hybrid replication policy that monitors the read-write frequencies of the nodes to decide dynamically what data to replicate and whether to do eager or lazy replication, in order to minimize network communication and support low-latency querying. In the second part, I study parallel execution of continuous neighborhood-driven aggregates, where each node aggregates the information generated in its neighborhoods. I build my system around the notion of an aggregation overlay graph, a pre-compiled data structure that enables sharing of partial aggregates across different queries and also allows partial pre-computation of the aggregates to minimize query latencies and increase throughput. Finally, I extend the framework to support continuous detection and analysis of activity-based subgraphs, where subgraphs can be specified using both graph structure and activity conditions on the nodes. Query specification tasks in my system are expressed using a set of active structural primitives, which allows the query evaluator to use a set of novel optimization techniques, thereby achieving high throughput. Overall, in this dissertation, I define and investigate a set of novel tasks on dynamic graphs, design scalable optimization techniques, build prototype systems, and show the effectiveness of the proposed techniques through extensive evaluation using large-scale real and synthetic datasets.
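The hybrid replication idea, deciding per node whether and how to replicate based on observed read/write frequencies, can be caricatured in a few lines. Class names and thresholds below are assumptions for illustration, not the dissertation's actual system:

```python
from collections import Counter
from enum import Enum

class Policy(Enum):
    NONE = "no-replica"   # remote reads fetch on demand
    LAZY = "lazy"         # replica updated asynchronously
    EAGER = "eager"       # replica updated on every write

class ReplicationMonitor:
    """Tracks per-node read/write counts and picks a replication mode.
    Thresholds are illustrative placeholders."""

    def __init__(self, read_threshold=10, ratio_threshold=3.0):
        self.reads = Counter()
        self.writes = Counter()
        self.read_threshold = read_threshold
        self.ratio_threshold = ratio_threshold

    def record_read(self, node):
        self.reads[node] += 1

    def record_write(self, node):
        self.writes[node] += 1

    def policy(self, node):
        r, w = self.reads[node], self.writes[node]
        if r < self.read_threshold:
            return Policy.NONE    # too cold to be worth replicating
        if r >= self.ratio_threshold * max(w, 1):
            return Policy.EAGER   # read-heavy: keep replicas fresh
        return Policy.LAZY        # write-heavy: replicate lazily
```

The trade-off this captures: read-heavy nodes pay a write-time cost for eager replication in exchange for cheap local reads, while write-heavy nodes replicate lazily so frequent updates do not flood the network.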
Abstract:
Institutions are widely regarded as important, even ultimate, drivers of economic growth and performance. A recent mainstream of institutional economics has concentrated on the effect of persisting, often imprecisely measured institutions and on cataclysmic events as agents of noteworthy institutional change. As a consequence, institutional change without large-scale shocks has received little attention. In this dissertation I apply a complementary, quantitative-descriptive approach that relies on measures of actually enforced institutions to study institutional persistence and change over a long period undisturbed by the typically studied cataclysmic events. Placing institutional change at the center of attention makes it possible to recognize different speeds of institutional innovation and the continuous coexistence of institutional persistence and change. Specifically, I combine text-mining procedures, network-analysis techniques and statistical approaches to study persistence and change in England’s common law over the Industrial Revolution (1700-1865). Based on the doctrine of precedent, a peculiarity of common law systems, I construct and analyze what appears to be the first citation network reflecting lawmaking in England. Most strikingly, I find large-scale change in the making of English common law around the turn of the 19th century, a period free from the typically studied cataclysmic events. Within a few decades, a legal innovation process with low depreciation rates (1 to 2 percent) and strong past-persistence transitioned to a present-focused innovation process with significantly higher depreciation rates (4 to 6 percent) and weak past-persistence. Comparison with U.S. Supreme Court data reveals a similar U.S. transition towards the end of the 19th century. The English and U.S. transitions appear to have unfolded in a very specific manner: a new body of law arose during the transitions and developed in a self-referential manner, while the existing body of law lost influence but remained prominent. Additional findings suggest that Parliament doubled its influence on the making of case law within the first decades after the Glorious Revolution and that England’s legal rules manifested a high degree of long-term persistence. The latter allows for the possibility that the often-noted persistence of institutional outcomes derives from the actual persistence of institutions.
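The depreciation rates quoted above can be made concrete through one standard style of estimator: fit an exponential decay to how often precedents of a given age are cited, and read the decay constant as the annual depreciation of precedent. Whether the dissertation uses exactly this estimator is not stated here, so the sketch below, with placeholder data, is only meant to illustrate the quantity:

```python
import numpy as np

# Hypothetical data: citation counts received by precedents of a given
# age (in years) at the time of citation. Placeholder values generated
# with a known 5% decay plus noise, standing in for real counts.
rng = np.random.default_rng(0)
age = np.arange(1, 41)
citations = 1000 * np.exp(-0.05 * age) * np.exp(rng.normal(0, 0.1, 40))

# Fit log(citations) = log(c0) - delta * age; under this simple model,
# delta is the annual depreciation rate of precedent.
slope, intercept = np.polyfit(age, np.log(citations), 1)
print(f"estimated depreciation rate: {-slope:.1%} per year")
```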
Abstract:
French Impressionism is a term often used in discussing music originating in France towards the end of the nineteenth century. The term Spanish Impressionism could equally be used for Spanish music written by the Spanish composers who studied and worked in Paris at the same time as their French counterparts. After all, Spanish music written during this period exhibits many of the same characteristics and aesthetics as French music of the same era. This dissertation focuses on the French and Spanish composers writing during that exciting time. Musical impressionism emphasizes harmonic effects and rhythmic fluidity in the pursuit of evocative moods and sound pictures of nature or places, over formal structure and thematic concerns. The music of this period is highly virtuosic as well as musically demanding, since many of the composers were brilliant pianists. My three dissertation recitals concentrated on works that exhibit the many facets of impressionism as well as its technical and musical challenges. The repertoire included selections by the Spanish composers Manuel de Falla, Isaac Albéniz, Enrique Granados, Joaquín Turina, and Joaquín Rodrigo and the French composers Claude Debussy and Maurice Ravel. The recitals took place on April 30, 2013, February 23, 2014, and October 11, 2015. They included solo piano works by Granados and Albéniz; vocal works by Debussy, Ravel, de Falla, Turina and Rodrigo; piano trios by Granados and Turina; instrumental duos by Debussy, Ravel and de Falla; and a two-piano work by Debussy transcribed by Ravel. All three recitals were held in Gildenhorn Recital Hall at the University of Maryland, and copies of this dissertation and recordings of each recital may be found through the Digital Repository at the University of Maryland (DRUM).