986 results for Modern state
Abstract:
The urgent need for teachers led the Florida legislature in 1887 to establish the Florida State Normal College at DeFuniak Springs. The college closed in 1905 with passage of the Buckman Act, which mandated a complete reorganization of state-supported higher education and ended coeducation for white students. This small college, open for eighteen years, was uniquely situated in time and place, making it a useful lens for examining larger questions in American educational history as well as for the history of higher education in Florida, which developed differently from that of other states. This historical case study used archival sources to examine the institution and to contribute to the history of the origins of Florida's system of higher education. Key questions guiding the research concerned the nature of the students, fundamental aspects of school life, the impact of the school on the students, and the role of the school in the development of higher education in Florida. Original sources included the Catalogs, Register, and Minutes of the school. The census of 1900 was used to develop information on the backgrounds of the students.
Findings were: DeFuniak Springs was chosen for the school because of the Florida Chautauqua; the school was coeducational and had few rules, but the internalized social codes of the students resulted in almost no difficulties with discipline; the students, a majority of whom were women, were from middle-class southern families; the college compared favorably in faculty, facilities, and curriculum to institutions elsewhere; although few students graduated, alumni played a key role in shaping Florida's common schools; and the Buckman Act entirely changed the nature of higher education in Florida. Implications were: the coeducational nature of the college a hundred years ago significantly changes the picture of Florida's higher education; the school was small, but its influence far outlasted the institution; and the school struggled with issues that continue to trouble modern educators, such as finances, the legislature, student retention, underpreparedness, and the proper structuring of a curriculum.
Abstract:
From 1889 to 1934, Florida's nurses, members of a new group of professional women, ushered in a pioneering phase of public health nursing in Florida. During this era, the nurses' ability to confront health and professional issues varied a great deal, but in quiet and forceful ways they tackled cultural and environmental problems to assist people who were ill or to help prevent illness. This dissertation places the development of professional public health nursing in its social context by uncovering the relationships public health nurses formed with clubwomen, the medical profession, city leaders, midwives, and others. In 1888, there were few graduate nurses in the state, no state board of health, and no organized nursing service to respond to Jacksonville's great yellow fever epidemic. By 1934, national and state leaders of public health nursing had built up the profession to become an essential part of the State Board of Health's service to the community. Between these milestones, in the era of white supremacy and Jim Crow, public health nurses combined their professional training with a pioneer spirit of innovation and risk-taking. In the predominantly rural state, the public health nurses' resolve to overcome environmental hazards and cultural obstacles stands out as they attempted to reach those who were unserved or underserved by modern medicine.
Abstract:
World War II profoundly impacted Florida. The military geography of the State is essential to an understanding of the war. The geostrategic concerns of place and space determined that Florida would become a statewide military base. Florida's attributes of place, such as climate and topography, determined its use as a military academy hosting over two million soldiers, nearly 15 percent of the GI Army, the largest force the US ever raised. One in eight Floridians went into uniform. Equally, Florida's space on the planet made it central for both defensive and offensive strategies. The Second World War was a war of movement, and Florida was a major jumping-off point for US force projection world-wide, especially of air power. Florida's demography facilitated its use as a base camp for the assembly and engagement of this military power. In 1940, less than two percent of the US population lived in Florida, a quiet, barely populated backwater of the United States. But owing to its critical place and space, over the next few years it became a 65,000 square mile training ground, supply dump, and embarkation site vital to the US war effort. Because of its place astride some of the most important sea lanes in the Atlantic World, Florida was the scene of one of the few Western Hemisphere battles of the war. The militarization of Florida began long before Pearl Harbor. The pre-war buildup conformed to the US strategy of the war. The strategy of the US was then (and remains today) one of forward defense: harden the frontier, then take the battle to the enemy, rather than fight them in North America. The policy of "Europe First" focused the main US war effort on the defeat of Hitler's Germany, evaluated to be the most dangerous enemy. Established in Florida were the military forces that required the longest time to develop and were most needed to defeat the Axis.
Those were a naval aviation force for sea-borne hostilities, a heavy bombing force for reducing enemy industrial states, and an aerial logistics train for overseas supply of expeditionary campaigns. The unique Florida coastline made possible the seaborne invasion training demanded for US victory. The civilian population was employed assembling mass-produced first-generation container ships, while Florida hosted casualties, prisoners of war, and transient personnel moving between the Atlantic and Pacific. By the end of hostilities and the lifting of the Unlimited Emergency, officially on December 31, 1946, Florida had become a transportation nexus. It accommodated a return of demobilized soldiers and a migration of displaced persons, and evolved into a modern veterans' colonia. It was instrumental in fashioning the modern US military, while remaining a center of the active National Defense establishment. Those are the themes of this work.
Abstract:
Renewable or sustainable energy (SE) sources have attracted the attention of many countries because the power generated is environmentally friendly and the sources are not subject to instability of price and availability. This dissertation presents new trends in the DC-AC converters (inverters) used in renewable energy sources, particularly for photovoltaic (PV) energy systems. A review of the existing technologies is performed for both single-phase and three-phase systems, and the pros and cons of the best candidates are investigated. In many modern energy conversion systems, a DC voltage, which is provided by an SE source or an energy storage device, must be boosted and converted to an AC voltage with a fixed amplitude and frequency. A novel switching pattern based on the concept of the conventional space-vector pulse-width-modulated (SVPWM) technique is developed for single-stage boost-inverters using the topology of current source inverters (CSI). The six main switching states and two zero states of conventional SVPWM techniques, in which three switches conduct at any given instant, are modified herein into three charging states and six discharging states with only two switches conducting at any given instant. The charging states are necessary in order to boost the DC input voltage. It is demonstrated that the CSI topology, in conjunction with the developed switching pattern, is capable of providing the required residential AC voltage from the low DC voltage of one PV panel at its rated power for both linear and nonlinear loads. In a micro-grid, active and reactive power control, and consequently voltage regulation, is one of the main requirements. Therefore, the capability of the single-stage boost-inverter to control the active power and provide the reactive power is investigated. It is demonstrated that the injected active and reactive power can be independently controlled through two modulation indices introduced in the proposed switching algorithm.
The system is capable of injecting a desirable level of reactive power, while maximum power point tracking (MPPT) dictates the desirable active power. The developed switching pattern is experimentally verified on a laboratory-scale three-phase 200 W boost-inverter for both grid-connected and stand-alone cases, and the results are presented.
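As a rough illustration of the space-vector timing described above, the sketch below computes conventional SVPWM dwell times within one 60° sector; in the boost CSI scheme, the zero-state interval is repurposed as a charging state. The function name and normalization are assumptions for illustration, not the dissertation's implementation.

```python
import math

def svpwm_dwell_times(m, theta, T_s):
    """Dwell times over one switching period of conventional SVPWM.

    m     : modulation index in [0, 1] (assumed normalization)
    theta : reference-vector angle within the 60-degree sector (rad)
    T_s   : switching period (s)

    Returns (t1, t2, t0): times spent on the two adjacent active states
    and the remaining zero-state time.  In the boost CSI variant described
    above, the zero-state interval t0 would be repurposed as a charging
    state that boosts the DC input voltage.
    """
    t1 = m * T_s * math.sin(math.pi / 3 - theta)   # first adjacent active state
    t2 = m * T_s * math.sin(theta)                 # second adjacent active state
    t0 = T_s - t1 - t2                             # zero/charging interval
    return t1, t2, t0
```

At the sector midpoint (theta = 30°) the two active-state times are equal, and lowering the modulation index lengthens the charging interval, which is consistent with needing more boost at lower input voltage.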
Abstract:
This dissertation presents a study of the D(e,e′p)n reaction carried out at the Thomas Jefferson National Accelerator Facility (Jefferson Lab) for a set of fixed values of four-momentum transfer Q² = 2.1 and 0.8 (GeV/c)² and for missing momenta pm ranging from pm = 0.03 to pm = 0.65 GeV/c. The analysis resulted in the determination of absolute D(e,e′p)n cross sections as a function of the recoiling neutron momentum and its scattering angle with respect to the momentum transfer vector q. The angular distribution was compared to various modern theoretical predictions that also included final state interactions (FSI). The data confirmed the theoretical prediction of a strong anisotropy of final state interaction contributions at Q² of 2.1 (GeV/c)², while at the lower Q² value the anisotropy was much less pronounced. At Q² of 0.8 (GeV/c)², theories show a large disagreement with the experimental results. The experimental momentum distribution of the bound proton inside the deuteron has been determined for the first time at a set of fixed neutron recoil angles. The momentum distribution is directly related to the ground state wave function of the deuteron in momentum space. The high momentum part of this wave function plays a crucial role in understanding the short-range part of the nucleon-nucleon force. At Q² = 2.1 (GeV/c)², the momentum distribution determined at small neutron recoil angles is much less affected by FSI compared to a recoil angle of 75°. In contrast, at Q² = 0.8 (GeV/c)² there seems to be no region with reduced FSI for larger missing momenta. Besides the statistical errors, systematic errors of about 5–6% were included in the final results in order to account for normalization uncertainties and uncertainties in the determination of kinematic variables. The measurements were carried out using electron beam energies of 2.8 and 4.7 GeV with beam currents between 10 and 100 μA.
The scattered electrons and the ejected protons originated from a 15 cm long liquid deuterium target and were detected in coincidence with the two high resolution spectrometers of Hall A at Jefferson Lab.
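The kinematic reconstruction behind such measurements can be sketched as follows: the recoil neutron momentum is identified with the missing momentum |q − p_p|. This is a minimal illustration that neglects the electron mass and fixes the scattering plane; the function and variable names are hypothetical, not the analysis code of the experiment.

```python
import numpy as np

def missing_momentum(E_beam, E_prime, theta_e, p_proton):
    """Sketch of D(e,e'p)n missing-momentum reconstruction.

    E_beam, E_prime : incident and scattered electron energies (GeV);
                      the electron mass is neglected (ultra-relativistic)
    theta_e         : electron scattering angle (rad), scattering in the x-z plane
    p_proton        : detected proton momentum 3-vector (GeV/c), numpy array

    Returns (pm, Q2): the missing momentum |q - p_p|, identified with the
    recoil neutron momentum, and the four-momentum transfer Q^2 in (GeV/c)^2.
    """
    k_in  = np.array([0.0, 0.0, E_beam])                              # beam along z
    k_out = E_prime * np.array([np.sin(theta_e), 0.0, np.cos(theta_e)])
    q  = k_in - k_out                  # three-momentum transfer
    nu = E_beam - E_prime              # energy transfer
    Q2 = np.dot(q, q) - nu**2          # Q^2 = |q|^2 - nu^2
    pm = np.linalg.norm(q - p_proton)  # recoil (missing) momentum
    return pm, Q2
```

If the detected proton carries the full momentum transfer, the missing momentum vanishes, corresponding to a neutron at rest before the reaction.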
Abstract:
X-ray computed tomography (CT) imaging constitutes one of the most widely used diagnostic tools in radiology today, with nearly 85 million CT examinations performed in the U.S. in 2011. CT imparts a relatively high radiation dose to the patient compared to other x-ray imaging modalities; as a result of this fact, coupled with its popularity, CT is currently the single largest source of medical radiation exposure to the U.S. population. For this reason, there is a critical need to optimize CT examinations such that the dose is minimized while the quality of the CT images is not degraded. This optimization can be difficult to achieve due to the relationship between dose and image quality: all else being equal, reducing the dose degrades image quality and can impact the diagnostic value of the CT examination.
A recent push from the medical and scientific community towards using lower doses has spawned new dose reduction technologies such as automatic exposure control (i.e., tube current modulation) and iterative reconstruction algorithms. In theory, these technologies could allow for scanning at reduced doses while maintaining the image quality of the exam at an acceptable level. Therefore, there is a scientific need to establish the dose reduction potential of these new technologies in an objective and rigorous manner. Establishing these dose reduction potentials requires precise and clinically relevant metrics of CT image quality, as well as practical and efficient methodologies to measure such metrics on real CT systems. The currently established methodologies for assessing CT image quality are not appropriate for assessing modern CT scanners that implement the aforementioned dose reduction technologies.
Thus, the purpose of this doctoral project was to develop, assess, and implement new phantoms, image quality metrics, analysis techniques, and modeling tools that are appropriate for image quality assessment of modern clinical CT systems. The project developed image quality assessment methods in the context of three distinct paradigms: (a) uniform phantoms, (b) textured phantoms, and (c) clinical images.
The work in this dissertation used the “task-based” definition of image quality. That is, image quality was broadly defined as the effectiveness by which an image can be used for its intended task. Under this definition, any assessment of image quality requires three components: (1) a well-defined imaging task (e.g., detection of subtle lesions), (2) an “observer” to perform the task (e.g., a radiologist or a detection algorithm), and (3) a way to measure the observer’s performance in completing the task at hand (e.g., detection sensitivity/specificity).
First, this task-based image quality paradigm was implemented using a novel multi-sized phantom platform (with uniform background) developed specifically to assess modern CT systems (Mercury Phantom, v3.0, Duke University). A comprehensive evaluation was performed on a state-of-the-art CT system (SOMATOM Definition Force, Siemens Healthcare) in terms of noise, resolution, and detectability as a function of patient size, dose, tube energy (i.e., kVp), automatic exposure control, and reconstruction algorithm (i.e., Filtered Back-Projection (FBP) vs. Advanced Modeled Iterative Reconstruction (ADMIRE)). A mathematical observer model (i.e., a computer detection algorithm) was implemented and used as the basis of image quality comparisons. It was found that image quality increased with increasing dose and decreasing phantom size. The CT system exhibited nonlinear noise and resolution properties, especially at very low doses, large phantom sizes, and for low-contrast objects. Objective image quality metrics generally increased with increasing dose and ADMIRE strength, and with decreasing phantom size. The ADMIRE algorithm could offer comparable image quality at reduced doses or improved image quality at the same dose (increase in detectability index by up to 163% depending on iterative strength). The use of automatic exposure control resulted in more consistent image quality with changing phantom size.
Based on those results, the dose reduction potential of ADMIRE was further assessed specifically for the task of detecting small (≤6 mm) low-contrast (≤20 HU) lesions. A new low-contrast detectability phantom (with uniform background) was designed and fabricated using a multi-material 3D printer. The phantom was imaged at multiple dose levels and images were reconstructed with FBP and ADMIRE. Human perception experiments were performed to measure the detection accuracy from FBP and ADMIRE images. It was found that ADMIRE had equivalent performance to FBP at 56% less dose.
Using the same image data as the previous study, a number of different mathematical observer models were implemented to assess which models would result in image quality metrics that best correlated with human detection performance. The models included naïve metrics of image quality, such as the contrast-to-noise ratio (CNR), and more sophisticated observer models, such as the non-prewhitening matched filter observer model family and the channelized Hotelling observer model family. It was found that the non-prewhitening matched filter observers and the channelized Hotelling observers both correlated strongly with human performance. Conversely, CNR was found not to correlate strongly with human performance, especially when comparing different reconstruction algorithms.
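A minimal sketch of the naïve CNR metric discussed above, assuming two rectangular ROIs as input; it also hints at why CNR can fail as a quality metric: it uses only first- and second-order pixel statistics and ignores the noise texture that model observers account for.

```python
import numpy as np

def cnr(signal_roi, background_roi):
    """Contrast-to-noise ratio from two image ROIs (numpy arrays).

    A 'naive' metric in the sense described above: it reduces image quality
    to mean contrast over background noise, ignoring noise correlations and
    texture -- one reason it can fail to track human detection performance
    across reconstruction algorithms.
    """
    contrast = signal_roi.mean() - background_roi.mean()   # mean signal difference
    noise = background_roi.std(ddof=1)                     # background noise (sample std)
    return contrast / noise
```

Model observers such as the channelized Hotelling observer instead operate on channelized image data and the full covariance of the noise, which is what lets them remain predictive when iterative reconstruction reshapes the noise texture.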
The uniform background phantoms used in the previous studies provided a good first-order approximation of image quality. However, due to their simplicity and the complexity of iterative reconstruction algorithms, it is possible that such phantoms are not fully adequate to assess the clinical impact of iterative algorithms, because patient images do not have smooth uniform backgrounds. To test this hypothesis, two textured phantoms (classified as gross texture and fine texture) and a uniform phantom of similar size were built and imaged on a SOMATOM Flash scanner (Siemens Healthcare). Images were reconstructed using FBP and Sinogram Affirmed Iterative Reconstruction (SAFIRE). Using an image subtraction technique, quantum noise was measured in all images of each phantom. It was found that with FBP, the noise was independent of the background (textured vs. uniform). However, for SAFIRE, noise increased by up to 44% in the textured phantoms compared to the uniform phantom. As a result, the noise reduction from SAFIRE was found to be up to 66% in the uniform phantom but as low as 29% in the textured phantoms. Based on this result, it was clear that further investigation was needed to understand the impact that background texture has on image quality when iterative reconstruction algorithms are used.
To further investigate this phenomenon with more realistic textures, two anthropomorphic textured phantoms were designed to mimic lung vasculature and fatty soft tissue texture. The phantoms (along with a corresponding uniform phantom) were fabricated with a multi-material 3D printer and imaged on the SOMATOM Flash scanner. Scans were repeated a total of 50 times in order to get ensemble statistics of the noise. A novel method of estimating the noise power spectrum (NPS) from irregularly shaped ROIs was developed. It was found that SAFIRE images had highly locally non-stationary noise patterns, with pixels near edges having higher noise than pixels in more uniform regions. Compared to FBP, SAFIRE images had 60% less noise on average in uniform regions; for edge pixels, noise was between 20% higher and 40% lower. The noise texture (i.e., NPS) was also highly dependent on the background texture for SAFIRE. Therefore, it was concluded that quantum noise properties in the uniform phantoms are not representative of those in patients for iterative reconstruction algorithms, and that texture should be considered when assessing the image quality of iterative algorithms.
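The ensemble NPS estimation can be sketched for the simple case of square ROIs. This is the textbook-style estimator, not the dissertation's novel method, which handled irregularly shaped ROIs; the function name and units are assumptions for illustration.

```python
import numpy as np

def nps2d(noise_rois, pixel_size):
    """2D noise power spectrum averaged over an ensemble of square noise ROIs.

    noise_rois : iterable of same-shape 2D numpy arrays containing noise only
                 (signal removed, e.g. by the image-subtraction technique
                 mentioned above, or by repeated scans)
    pixel_size : pixel pitch (mm); NPS units are then HU^2 mm^2

    Each ROI is mean-subtracted, its 2D FFT power is computed, and the
    ensemble average is normalized by ROI area in pixels.
    """
    rois = [np.asarray(r, dtype=float) for r in noise_rois]
    ny, nx = rois[0].shape
    spectra = [np.abs(np.fft.fft2(r - r.mean()))**2 for r in rois]
    return (pixel_size**2 / (nx * ny)) * np.mean(spectra, axis=0)
```

By Parseval's theorem, the NPS integrates to the (pixel-area-scaled) noise variance, which gives a quick sanity check of any implementation.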
To move beyond assessing noise properties in textured phantoms towards assessing detectability, a series of new phantoms was designed specifically to measure low-contrast detectability in the presence of background texture. The textures used were optimized with a genetic algorithm to match the texture in the liver regions of actual patient CT images. The so-called “Clustered Lumpy Background” texture synthesis framework was used to generate the modeled texture. Three textured phantoms and a corresponding uniform phantom were fabricated with a multi-material 3D printer and imaged on the SOMATOM Flash scanner. Images were reconstructed with FBP and SAFIRE and analyzed using a multi-slice channelized Hotelling observer to measure detectability and the dose reduction potential of SAFIRE based on the uniform and textured phantoms. It was found that at the same dose, the improvement in detectability from SAFIRE (compared to FBP) was higher when measured in a uniform phantom than in the textured phantoms.
The final phase of this project aimed to develop methods to mathematically model lesions as a means to help assess image quality directly from patient images. The mathematical modeling framework is first presented. The models describe a lesion’s morphology in terms of size, shape, contrast, and edge profile as an analytical equation. The models can be voxelized and inserted into patient images to create so-called “hybrid” images. These hybrid images can then be used to assess detectability or estimability with the advantage that the ground truth of the lesion morphology and location is known exactly. Based on this framework, a series of liver lesions, lung nodules, and kidney stones were modeled based on images of real lesions. The lesion models were virtually inserted into patient images to create a database of hybrid images to go along with the original database of real lesion images. ROI images from each database were assessed by radiologists in a blinded fashion to determine the realism of the hybrid images. It was found that the radiologists could not readily distinguish between real and virtual lesion images (area under the ROC curve was 0.55). This study provided evidence that the proposed mathematical lesion modeling framework could produce reasonably realistic lesion images.
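The voxelization step can be illustrated with a deliberately simple analytical lesion: a sphere whose size, contrast, and sigmoid edge profile are parameters. The specific functional form and names here are assumptions for illustration, not the dissertation's actual lesion models.

```python
import numpy as np

def voxelize_lesion(shape, center, radius, contrast, edge_width):
    """Voxelize a simple analytical lesion model (hypothetical parameterization).

    shape      : output volume dimensions, e.g. (21, 21, 21)
    center     : lesion center in voxel coordinates
    radius     : lesion radius (voxels), controlling size
    contrast   : peak lesion contrast (e.g. HU difference from background)
    edge_width : width of the sigmoid edge profile (voxels)

    Returns a volume that can be added to a patient image to create a
    'hybrid' image in the spirit of the framework described above.
    """
    grid = np.indices(shape).astype(float)
    # radial distance of every voxel from the lesion center
    r = np.sqrt(sum((g - c)**2 for g, c in zip(grid, center)))
    # logistic (sigmoid) edge: ~contrast inside, ~0 outside, half-max at r = radius
    return contrast / (1.0 + np.exp((r - radius) / edge_width))
```

Because the lesion is defined analytically, its ground-truth size, contrast, and location are known exactly, which is precisely what makes hybrid images useful for detectability and estimability studies.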
Based on that result, two studies were conducted which demonstrated the utility of the lesion models. The first study used the modeling framework as a measurement tool to determine how dose and reconstruction algorithm affected the quantitative analysis of liver lesions, lung nodules, and renal stones in terms of their size, shape, attenuation, edge profile, and texture features. The same database of real lesion images used in the previous study was used for this study. That database contained images of the same patient at 2 dose levels (50% and 100%) along with 3 reconstruction algorithms from a GE 750HD CT system (GE Healthcare). The algorithms in question were FBP, Adaptive Statistical Iterative Reconstruction (ASiR), and Model-Based Iterative Reconstruction (MBIR). A total of 23 quantitative features were extracted from the lesions under each condition. It was found that both dose and reconstruction algorithm had a statistically significant effect on the feature measurements. In particular, radiation dose affected five, three, and four of the 23 features (related to lesion size, conspicuity, and pixel-value distribution) for liver lesions, lung nodules, and renal stones, respectively. MBIR significantly affected 9, 11, and 15 of the 23 features (including size, attenuation, and texture features) for liver lesions, lung nodules, and renal stones, respectively. Lesion texture was not significantly affected by radiation dose.
The second study demonstrating the utility of the lesion modeling framework focused on assessing detectability of very low-contrast liver lesions in abdominal imaging. Specifically, detectability was assessed as a function of dose and reconstruction algorithm. As part of a parallel clinical trial, images from 21 patients were collected at 6 dose levels per patient on a SOMATOM Flash scanner. Subtle liver lesion models (contrast = -15 HU) were inserted into the raw projection data from the patient scans. The projections were then reconstructed with FBP and SAFIRE (strength 5). Also, lesion-less images were reconstructed. Noise, contrast, CNR, and the detectability index of an observer model (non-prewhitening matched filter) were assessed. It was found that SAFIRE reduced noise by 52%, reduced contrast by 12%, increased CNR by 87%, and increased detectability index by 65% compared to FBP. Further, a 2AFC human perception experiment was performed to assess the dose reduction potential of SAFIRE, which was found to be 22% compared to the standard-of-care dose.
In conclusion, this dissertation provides to the scientific community a series of new methodologies, phantoms, analysis techniques, and modeling tools that can be used to rigorously assess image quality from modern CT systems. Specifically, methods to properly evaluate iterative reconstruction have been developed and are expected to aid in the safe clinical implementation of dose reduction technologies.
Abstract:
In this study, we investigated the relationship between vegetation and modern pollen rain along the elevational gradient of Mount Paggeo. We applied multivariate data analysis to assess the relationship between vegetation and modern pollen rain and to quantify the representativeness of forest zones. This study represents the first statistical analysis of the pollen-vegetation relationship along an elevational gradient in Greece. Hence, this paper improves confidence in the interpretation of palynological records from north-eastern Greece and may refine past climate reconstructions for more accurate data-model comparisons. Numerical classification and ordination were performed on the pollen data to assess differences among plant communities that beech (Fagus sylvatica) dominates or co-dominates. The results show a strong relationship between altitude, arboreal cover, human impact, and variations in the percentages of pollen and non-pollen palynomorph taxa.
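As a generic illustration of the ordination step, the sketch below performs an unconstrained ordination (PCA via SVD) of a samples-by-taxa percentage matrix. The actual study may have used different transformations or constrained ordination methods; this is only a minimal stand-in.

```python
import numpy as np

def pca_ordination(percentages):
    """Unconstrained ordination (PCA) of a samples-by-taxa percentage matrix.

    Rows are modern pollen samples along the elevational gradient; columns
    are pollen (or non-pollen palynomorph) taxa percentages.  Real analyses
    typically transform the data first (e.g. square-root) and may use
    constrained methods such as CCA instead.

    Returns (scores, explained): sample scores on the principal axes and
    the fraction of variance explained by each axis.
    """
    X = np.asarray(percentages, dtype=float)
    X = X - X.mean(axis=0)                      # center each taxon
    u, s, vt = np.linalg.svd(X, full_matrices=False)
    scores = u * s                              # sample scores per PC axis
    explained = s**2 / np.sum(s**2)             # variance explained per axis
    return scores, explained
```

Plotting the first two axes of `scores` against elevation is the usual way such analyses visualize how pollen assemblages track the vegetation zones.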
Abstract:
This article offers a critical reconstruction of Keynes's view of the relationship between public spending, the interest rate, wages, and unemployment, as formulated in his Treatise on Money. The paper argues that Keynes's approach leads to economic policy proposals that emphasize the need for direct state intervention in the provision of goods and services. This conclusion is derived from a circuitist interpretation of his work.
Abstract:
This thesis is first and foremost an examination of the notion and consequences of ‘state failure’ in international law. The disputes surrounding the criteria for the creation and recognition of states pertain to efforts to analyse legal and factual issues unravelling throughout the continuing existence of states, as best evidenced by the ‘state failure’ phenomenon. It is argued that although the ‘statehood’ of failed states remains uncontested, their sovereignty is increasingly considered to be dependent on the existence of effective governments. The second part of this thesis focuses on examining the legal consequences of the continuing existence of failed states in the context of jus ad bellum. Since the creation of the United Nations, the ability of states to resort to armed force without violating what might be considered the single most important norm of international law has been considerably limited. State failure and the increasing importance of non-state actors have become greatly topical issues within recent years in both scholarship and the popular imagination. There have been important legal developments within international law, which have provoked much academic and, in particular, legal commentary. On one level, the thesis contributes to this commentary. Despite the fact that the international community continues to perpetuate a notion of ‘statehood’ which allows the state-centric system of international law to exist, when dealing with the practical and political realities of state failure, international law may no longer consider the external sovereignty of states an undeniable entitlement to statehood. Accordingly, the main research question of this thesis is whether the implicit or explicit invocation of state failure provides a sufficient legal basis for intervention in self-defence against non-state actors located in failed states.
It has been argued that state failure has a profound impact, the extent of which is yet to be fully explored, on the modern landscape of peace and security.
Resumo:
Renewable or sustainable energy (SE) sources have attracted the attention of many countries because the power generated is environmentally friendly, and the sources are not subject to the instability of price and availability. This dissertation presents new trends in the DC-AC converters (inverters) used in renewable energy sources, particularly for photovoltaic (PV) energy systems. A review of the existing technologies is performed for both single-phase and three-phase systems, and the pros and cons of the best candidates are investigated. In many modern energy conversion systems, a DC voltage, which is provided from a SE source or energy storage device, must be boosted and converted to an AC voltage with a fixed amplitude and frequency. A novel switching pattern based on the concept of the conventional space-vector pulse-width-modulated (SVPWM) technique is developed for single-stage, boost-inverters using the topology of current source inverters (CSI). The six main switching states, and two zeros, with three switches conducting at any given instant in conventional SVPWM techniques are modified herein into three charging states and six discharging states with only two switches conducting at any given instant. The charging states are necessary in order to boost the DC input voltage. It is demonstrated that the CSI topology in conjunction with the developed switching pattern is capable of providing the required residential AC voltage from a low DC voltage of one PV panel at its rated power for both linear and nonlinear loads. In a micro-grid, the active and reactive power control and consequently voltage regulation is one of the main requirements. Therefore, the capability of the single-stage boost-inverter in controlling the active power and providing the reactive power is investigated. It is demonstrated that the injected active and reactive power can be independently controlled through two modulation indices introduced in the proposed switching algorithm. 
The system is capable of injecting a desirable level of reactive power, while the maximum power point tracking (MPPT) algorithm dictates the desired active power. The developed switching pattern is experimentally verified on a laboratory-scale, three-phase, 200 W boost inverter for both grid-connected and stand-alone cases, and the results are presented.
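For context, the conventional SVPWM technique that the proposed pattern builds on computes, for each reference voltage vector, the sector and the dwell times of the two adjacent active states and the zero state. A minimal sketch of that textbook calculation follows; it shows only the standard technique, not the dissertation's modified charging/discharging pattern, and all names are illustrative:

```python
import math

def svpwm_dwell_times(v_ref, theta, v_dc, t_s):
    """Conventional SVPWM: sector and dwell times for a reference vector.

    v_ref : reference vector magnitude (V)
    theta : reference vector angle in radians, in [0, 2*pi)
    v_dc  : DC-link voltage (V)
    t_s   : switching period (s)
    Returns (sector, t1, t2, t0).
    """
    m = math.sqrt(3) * v_ref / v_dc               # modulation index
    sector = int(theta // (math.pi / 3)) + 1      # sectors 1..6, 60 degrees each
    theta_s = theta - (sector - 1) * math.pi / 3  # angle within the sector
    t1 = t_s * m * math.sin(math.pi / 3 - theta_s)  # first adjacent active state
    t2 = t_s * m * math.sin(theta_s)                # second adjacent active state
    t0 = t_s - t1 - t2                              # remaining time in zero states
    return sector, t1, t2, t0
```

In the dissertation's scheme the zero states are replaced by charging states that boost the DC input, which is what distinguishes the single-stage boost inverter from this conventional pattern.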
Abstract:
A High-Performance Computing (HPC) job dispatcher is a critical software component that assigns finite computing resources to submitted jobs. This resource assignment over time is known as the on-line job dispatching problem in HPC systems. Because the problem is on-line, solutions must be computed in real time, and the time required must not exceed a threshold, so as not to disturb normal system functioning. In addition, a job dispatcher must deal with considerable uncertainty: submission times, the number of requested resources, and the duration of jobs. Heuristic techniques have been broadly used in HPC systems; they deliver solutions in a short time, at the cost of (sub-)optimality. Moreover, their scheduling and resource allocation components are separated, which produces decoupled decisions that may cause a performance loss. Optimization-based techniques are used less often for this problem, although they can significantly improve the performance of HPC systems, at the expense of higher computation time. Nowadays, HPC systems are used for modern applications, such as big data analytics and predictive model building, which in general employ many short jobs. Job durations, however, are unknown at dispatching time, and job dispatchers need to process large numbers of short jobs quickly while ensuring high Quality-of-Service (QoS) levels. Constraint Programming (CP) has been shown to be an effective approach to job dispatching problems. However, state-of-the-art CP-based job dispatchers are unable to meet the challenges of on-line dispatching, such as generating dispatching decisions in a brief period and integrating current and past information about the hosting system.
For these reasons, we propose CP-based dispatchers that are better suited to HPC systems running modern applications: they generate on-line dispatching decisions in an acceptable time and make effective use of job duration predictions to improve QoS levels, especially for workloads dominated by short jobs.
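To make the dispatching problem concrete, a decoupled heuristic dispatcher of the kind the abstract contrasts with CP can be sketched in a few lines. This toy version favours jobs with short predicted durations on a single shared core pool; the policy, data shapes, and names are illustrative and are not the dissertation's algorithm:

```python
import heapq

def dispatch(jobs, total_cores):
    """Toy shortest-predicted-job-first dispatcher (illustrative only).

    jobs        : list of (job_id, requested_cores, predicted_duration);
                  each request is assumed to fit within total_cores.
    total_cores : size of the shared core pool.
    Returns a dict mapping job_id -> assigned start time.
    """
    jobs = sorted(jobs, key=lambda j: j[2])  # favour short jobs (QoS for short-job workloads)
    free = total_cores
    running = []                             # min-heap of (finish_time, cores_held)
    clock = 0.0
    start = {}
    for jid, cores, dur in jobs:
        while free < cores:                  # wait for running jobs to release cores
            finish, held = heapq.heappop(running)
            clock = max(clock, finish)
            free += held
        start[jid] = clock
        free -= cores
        heapq.heappush(running, (clock + dur, cores))
    return start
```

A CP-based dispatcher would instead solve scheduling and allocation jointly as one constrained optimization, which is the coupling this heuristic lacks.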
Abstract:
This work deals with the development of calibration procedures and control systems to improve the performance and efficiency of modern spark ignition turbocharged engines. The algorithms developed are used to optimize and manage the spark advance and the air-to-fuel ratio, controlling knock and the exhaust gas temperature at the turbine inlet. The described work falls within the activity that the research group started in previous years with the industrial partner Ferrari S.p.A. The first chapter deals with the development of a control-oriented engine simulator based on a neural network approach, with which the main combustion indexes can be simulated. The second chapter deals with the development of a procedure to calibrate, offline, the spark advance and the air-to-fuel ratio to run the engine under knock-limited conditions and with the maximum admissible exhaust gas temperature at the turbine inlet. This procedure is then converted into a model-based control system and validated with a Software-in-the-Loop approach using the engine simulator developed in the first chapter. Finally, it is implemented on rapid control prototyping hardware to manage combustion in steady-state and transient operating conditions at the test bench. The third chapter deals with the study of an innovative and inexpensive sensor for in-cylinder pressure measurement: a piezoelectric washer that can be installed between the spark plug and the engine head. The signal generated by this kind of sensor is studied, and a specific algorithm is developed to adjust the value of the knock index in real time. Finally, with the engine simulator developed in the first chapter, it is demonstrated that the innovative sensor can be coupled with the control system described in the second chapter and that the resulting performance can match that reachable with standard in-cylinder pressure sensors.
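Knock indexes of the kind adjusted here are commonly computed as the Maximum Amplitude of Pressure Oscillations (MAPO): the peak of the band-pass-filtered in-cylinder pressure trace over the knock window. A minimal stdlib-only sketch follows, using moving-average subtraction as a crude stand-in for a calibrated band-pass filter; it illustrates the general MAPO idea, not the thesis's specific algorithm:

```python
def knock_index_mapo(pressure, window=25):
    """Toy MAPO-style knock index: peak amplitude of pressure oscillations.

    pressure : sampled in-cylinder pressure trace (list of floats)
    window   : moving-average length used here as a crude high-pass filter
               (a real system would apply a calibrated band-pass filter)
    """
    n = len(pressure)
    osc = []
    for i in range(n):
        lo = max(0, i - window // 2)
        hi = min(n, i + window // 2 + 1)
        baseline = sum(pressure[lo:hi]) / (hi - lo)  # local mean (slow component)
        osc.append(pressure[i] - baseline)           # oscillatory component
    return max(abs(x) for x in osc)                  # MAPO: max oscillation amplitude
```

A smooth compression trace yields an index near zero, while knock-induced ringing raises it sharply, which is why a threshold on such an index can gate the spark-advance controller.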
Abstract:
In modern society, the security of IT systems is intertwined with interdisciplinary aspects, from social life to sustainability, and threats endanger many aspects of everyone's daily life. To address the problem, it is important that the systems we use guarantee a certain degree of security; to achieve this, it must be possible to measure that security. Measuring security is not an easy task, but many initiatives, including European regulations, aim to make it possible. One method of measuring security is based on security metrics: ways of assessing, from various aspects, vulnerabilities, methods of defense, and the risks and impacts of successful attacks, as well as the efficacy of reactions, giving precise results through mathematical and statistical techniques. I conducted a literature review to provide an overview of the meaning, effects, problems, applications, and overall current situation of security metrics, with particular emphasis on giving practical examples. This thesis starts with a summary of the state of the art in the field of security metrics, together with application examples, to outline the gaps in the current literature and the difficulties found when the application context changes, and then advances research questions aimed at fostering the discussion towards a more complete and applicable view of the subject. Finally, it stresses the lack of security metrics that consider interdisciplinary aspects, giving some potential starting points for developing security metrics that cover all aspects involved, taking the field to a new level of formal soundness and practical usability.
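As a concrete, if simplistic, example of the kind of metric discussed: a scalar risk score can be built by combining per-finding likelihood and impact estimates. The formula below is purely illustrative and is not taken from the thesis; real frameworks such as CVSS define standardised scoring formulas:

```python
def mean_risk_score(findings):
    """Toy security metric: mean of likelihood * impact over all findings,
    capped to the [0, 10] range. Purely illustrative.

    findings : list of (likelihood, impact) pairs,
               likelihood in [0, 1], impact in [0, 10].
    """
    if not findings:
        return 0.0
    raw = sum(l * i for l, i in findings) / len(findings)
    return min(10.0, raw)
```

Even this trivial metric exhibits the issues the thesis highlights: the score depends entirely on how likelihood and impact are estimated, and it ignores interdisciplinary factors such as social or sustainability costs.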