Abstract:
Despite lake sensitivity to climate change, few Florida paleolimnological studies have focused on changes in hydrology. Evidence from Florida vegetation histories raises questions about the long-term hydrologic history of Florida lakes, and a 25-year limnological dataset revealed recent climate-driven effects on Lake Annie. The objectives of this research are (1) to use modern diatom assemblages to develop methods for reconstruction of climatic and anthropogenic change, and (2) to reconstruct both long-term and recent histories of Lake Annie using diatom microfossils. Paleoenvironmental reconstruction models were developed from diatom assemblages of various habitat types from modern lakes. Plankton and sediment assemblages were similar, but epiphytes were distinct, suggesting differences in sediment delivery from different parts of the lakes. Relationships between a variety of physical and chemical data and the diatoms from each habitat type were explored. Total phosphorus (TP), pH, and color were found to be the most relevant variables for reconstruction, with sediment and epiphyte assemblages having the strongest relationships to those variables. Six calibration models were constructed from combinations of these habitat types and environmental variables. Reconstructions utilizing the weighted averaging models in this study may be used to directly reveal TP, color, and pH changes from a sediment record, which might be suggestive of hydrologic change as well. These variables were reconstructed from the diatom record for both a long-term (11,000-year) and a short-term (100-year) record and showed an interaction between climate-driven and local land-use impacts on Lake Annie. The long-term record begins with Lake Annie as a wetland; the lake then filled to a high stand around 4,000 years ago. A period of relative stability after that point was interrupted near the turn of the last century by subtle changes in diatom communities that indicate acidification.
Abrupt changes in the diatom communities around 1970 AD suggest recovery from acidification, but concurrent hydrologic change intensified anthropogenic effects on the lake. Diatom evidence for alkalization and phosphorus loading corresponds to changes seen in the limnological record.
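The weighted-averaging transfer functions described above follow a simple two-step recipe: each taxon's optimum is the abundance-weighted mean of an environmental variable across the calibration lakes, and a fossil sample's inferred value is the abundance-weighted mean of those optima. A minimal sketch with invented counts and TP values (not the dissertation's calibration data):

```python
import numpy as np

def wa_optima(abundances, env):
    """Species optima: abundance-weighted mean of the environmental
    variable across the training lakes.
    abundances: (n_lakes, n_species); env: (n_lakes,)"""
    return (abundances * env[:, None]).sum(axis=0) / abundances.sum(axis=0)

def wa_reconstruct(sample, optima):
    """Inferred value for one fossil sample: abundance-weighted
    mean of the species optima."""
    return (sample * optima).sum() / sample.sum()

# Toy training set: 3 lakes, 2 diatom taxa, known total phosphorus (TP)
counts = np.array([[8.0, 2.0],
                   [5.0, 5.0],
                   [1.0, 9.0]])
tp = np.array([10.0, 20.0, 40.0])
optima = wa_optima(counts, tp)
print(wa_reconstruct(np.array([4.0, 6.0]), optima))  # ~24.3, between the calibration TP values
```

In practice such models are tuned and cross-validated (e.g., with deshrinking corrections), but the core inference is exactly this pair of weighted means.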
Abstract:
The spatial and temporal distributions of modern diatom assemblages in surface sediments, on the most dominant macrophytes, and in the water column at 96 locations in Florida Bay, Biscayne Bay, and adjacent regions were examined in order to develop paleoenvironmental prediction models for this region. Analyses of these distributions revealed distinct temporal and spatial differences in assemblages among the locations. The differences among diatom assemblages living on subaquatic vegetation, on sediments, and in the water column were significant. Because concentrations of salts, total phosphorus (WTP), total nitrogen (WTN), and total organic carbon (WTOC) are partly controlled by water management in this region, diatom-based models were produced to assess these variables. Discriminant function analyses showed that diatoms can also be successfully used to reconstruct changes in the abundance of diatom assemblages typical of different habitats and life habits. To interpret paleoenvironmental changes, changes in salinity, WTN, WTP, and WTOC were inferred from diatoms preserved in sediment cores collected along environmental gradients in Florida Bay (4 cores) and from nearshore and offshore locations in Biscayne Bay (3 cores). The reconstructions showed that water quality conditions in these estuaries have been fluctuating for thousands of years due to natural processes and sea-level changes, but almost synchronized shifts in diatom assemblages occurred in the mid-1960s at all coring locations (except Ninemile Bank and Bob Allen Bank in Florida Bay). These alterations correspond to the major construction of numerous water management structures on the mainland.
Additionally, all the coring sites (except Card Sound Bank, Biscayne Bay and Trout Cove, Florida Bay) showed decreasing salinity and fluctuations in nutrient levels in the last two decades that correspond to increased rainfall in the 1990s and increased freshwater discharge to the bays, a result of increased freshwater deliveries to the Everglades by the South Florida Water Management District in the 1980s and 1990s. Reconstructions of the abundance of diatom assemblages typical of different habitats and life habits revealed multiple sources of diatoms to the coring locations and showed that epiphytic assemblages in both bays have increased in abundance since the early 1990s.
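Discriminant function analysis of the kind used above separates habitat-typical assemblages by finding directions in species space that best distinguish the groups. A two-class Fisher discriminant conveys the idea; the taxa counts and class labels below are invented for illustration, not taken from the study:

```python
import numpy as np

def fisher_lda(X0, X1):
    """Two-class Fisher discriminant: direction w maximizing
    between-class over within-class scatter."""
    mu0, mu1 = X0.mean(axis=0), X1.mean(axis=0)
    # Within-class scatter = sum of per-class scatter matrices
    Sw = np.cov(X0.T) * (len(X0) - 1) + np.cov(X1.T) * (len(X1) - 1)
    return np.linalg.solve(Sw, mu1 - mu0)

# Toy diatom counts (2 taxa) for epiphytic vs planktonic samples
epi = np.array([[8.0, 1.0], [7.0, 2.0], [9.0, 1.5]])
plank = np.array([[2.0, 7.0], [1.0, 9.0], [2.5, 8.0]])
w = fisher_lda(epi, plank)
mid = (epi.mean(axis=0) + plank.mean(axis=0)) / 2
classify = lambda x: int((x - mid) @ w > 0)  # 1 = planktonic-like
print(classify(np.array([1.5, 8.5])))  # → 1
```

Real studies use many taxa and several classes, but the geometric idea, projecting assemblages onto discriminating axes, is the same.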
Abstract:
Subtitle D of the Resource Conservation and Recovery Act (RCRA) requires a post-closure period of 30 years for non-hazardous wastes in landfills. Post-closure care (PCC) activities under Subtitle D include leachate collection and treatment, groundwater monitoring, inspection and maintenance of the final cover, and monitoring to ensure that landfill gas does not migrate off site or into on-site buildings. The decision to reduce PCC duration requires exploration of a performance-based methodology applicable to Florida landfills. PCC should be based on whether the landfill is a threat to human health or the environment. Historically, no risk-based procedure has been available to establish an early end to PCC. Landfill stability depends on a number of factors, including variables related to operations both before and after the closure of a landfill cell. Therefore, PCC decisions should be based on location-specific factors, operational factors, design factors, post-closure performance, end use, and risk analysis. The question of the appropriate PCC period for Florida's landfills requires in-depth case studies focusing on the analysis of performance data from closed landfills in Florida. Based on data availability, the Davie Landfill was identified as the case-study site for a case-by-case analysis of landfill stability. The performance-based PCC decision system developed by Geosyntec Consultants was used for the assessment of site conditions to project PCC needs. The available data on leachate and gas quantity and quality, groundwater quality, and cap conditions were evaluated. The quality and quantity data for leachate and gas were analyzed to project the levels of pollutants in leachate and groundwater with reference to the maximum contaminant level (MCL). In addition, the future gas quantity was projected. A set of contaminants (including metals and organics) detected in groundwater was identified for health risk assessment.
These contaminants were selected based on their detection frequency and levels in leachate and groundwater, and on their historical and projected trends. During the evaluations, a range of discrepancies and problems related to data collection and documentation were encountered, and possible solutions were proposed. Based on the results of the PCC performance evaluation integrated with risk assessment, future PCC monitoring needs and sustainable waste management options were identified. According to these results, landfill gas monitoring can be terminated, while leachate and groundwater monitoring for parameters above the MCL and surveying of cap integrity should be continued. The parameters that cause longer monitoring periods can be eliminated in future sustainable landfills. In conclusion, the 30-year PCC period can be reduced for some landfill components based on their potential impacts on human health and the environment (HH&E).
Abstract:
Performance-based maintenance contracts differ significantly from the material- and method-based contracts that have traditionally been used to maintain roads. Road agencies around the world have moved toward a performance-based contract approach because it offers several advantages, such as cost savings, greater budgeting certainty, and better customer satisfaction through improved road services and conditions. In these contracts, payments for road maintenance are explicitly linked to the contractor successfully meeting certain clearly defined minimum performance indicators. Quantitative evaluation of the cost of performance-based contracts presents several difficulties due to the complexity of the pavement deterioration process. Based on a probabilistic analysis of failures to achieve multiple performance criteria over the length of the contract period, an effort has been made to develop a model capable of estimating the cost of these performance-based contracts. One of the essential functions of such a model is to predict the performance of the pavement as accurately as possible. Prediction of future pavement degradation is done using a Markov chain process, which requires estimating transition probabilities from previous deterioration rates for similar pavements. Transition probabilities were derived using historical pavement condition rating data, both for predicting pavement deterioration when there is no maintenance and for predicting pavement improvement when maintenance activities are performed. A methodological framework has been developed to estimate the cost of maintaining a road based on multiple performance criteria such as cracking, rutting, and roughness. The application of the developed model has been demonstrated via a real case study of Miami Dade Expressways (MDX), using pavement condition rating data from the Florida Department of Transportation (FDOT) for a typical performance-based asphalt pavement maintenance contract.
Results indicated that the pavement performance model developed could predict pavement deterioration quite accurately. Sensitivity analysis shows that the model is very responsive to even slight changes in pavement deterioration rate and performance constraints. It is expected that the use of this model will assist highway agencies and contractors in arriving at a fair contract value for executing long-term performance-based pavement maintenance works.
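The Markov chain prediction step described above multiplies the current condition-state distribution by a transition matrix estimated from historical condition ratings. A minimal sketch with a hypothetical three-state matrix (the values are illustrative, not derived from FDOT data):

```python
import numpy as np

# Hypothetical 3-state condition ratings: 0 = good, 1 = fair, 2 = poor.
# Row i gives the probability of moving to each state in one year
# with no maintenance (illustrative values only).
P_no_maint = np.array([[0.80, 0.15, 0.05],
                       [0.00, 0.70, 0.30],
                       [0.00, 0.00, 1.00]])

def predict(state, P, years):
    """Distribution over condition states after `years` transitions."""
    return state @ np.linalg.matrix_power(P, years)

state0 = np.array([1.0, 0.0, 0.0])   # pavement starts in 'good'
print(predict(state0, P_no_maint, 2))
```

Maintenance is modeled the same way with a second, improvement-oriented matrix; expected failure probabilities against each performance threshold then feed the contract cost estimate.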
Abstract:
The primary goal of this dissertation is to develop point-based rigid and non-rigid image registration methods that have better accuracy than existing methods. We first present point-based PoIRe, which provides the framework for point-based global rigid registrations. It allows a choice of different search strategies, including (a) branch-and-bound, (b) probabilistic hill-climbing, and (c) a novel hybrid method that takes advantage of the best characteristics of the other two. We use a robust similarity measure that is insensitive to noise, which is often introduced during feature extraction. We show the robustness of PoIRe by using it to register images obtained with an electronic portal imaging device (EPID), which have large amounts of scatter and low contrast. To evaluate PoIRe we used (a) simulated images and (b) images with fiducial markers; PoIRe was extensively tested with 2D EPID images and with images generated by 3D Computed Tomography (CT) and Magnetic Resonance (MR) imaging. PoIRe was also evaluated using benchmark data sets from the blind Retrospective Image Registration Evaluation (RIRE) project. We show that PoIRe is better than existing methods such as Iterative Closest Point (ICP) and methods based on mutual information. We also present a novel point-based local non-rigid shape registration algorithm. We extend the robust similarity measure used in PoIRe to non-rigid registrations, adapting it to a free-form deformation (FFD) model and making it robust to local minima, a drawback common to existing non-rigid point-based methods. For non-rigid registrations we show that it performs better than existing methods and that it is less sensitive to starting conditions. We test our non-rigid registration method using available benchmark data sets for shape registration. Finally, we also explore the extraction of features invariant to changes in perspective and illumination, and how they can help improve the accuracy of multimodal registration.
For multimodal registration of EPID-DRR images we present a method based on a local descriptor defined by a vector of complex responses to a circular Gabor filter.
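For context, the closed-form least-squares step underlying rigid point alignment, which is also the inner step of the ICP baseline compared against above, can be sketched as follows. This is the standard Kabsch/Procrustes solution, not PoIRe's robust similarity measure:

```python
import numpy as np

def rigid_align(src, dst):
    """Least-squares rotation R and translation t with dst ~ R @ src + t
    (the Kabsch/Procrustes step at the core of one ICP iteration)."""
    mu_s, mu_d = src.mean(axis=0), dst.mean(axis=0)
    H = (src - mu_s).T @ (dst - mu_d)          # cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))     # guard against reflections
    D = np.diag([1.0] * (src.shape[1] - 1) + [d])
    R = Vt.T @ D @ U.T
    t = mu_d - R @ mu_s
    return R, t

# Recover a known 2D rotation + translation from point pairs
theta = np.deg2rad(30)
R_true = np.array([[np.cos(theta), -np.sin(theta)],
                   [np.sin(theta),  np.cos(theta)]])
pts = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 2.0]])
R, t = rigid_align(pts, pts @ R_true.T + np.array([2.0, -1.0]))
```

ICP alternates this step with nearest-neighbor correspondence updates; the dissertation's contribution is in replacing the underlying similarity measure and search strategy, not this algebra.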
Abstract:
Silicon photonics is a very promising technology for future low-cost, high-bandwidth optical telecommunication applications down to the chip level. This is due to the high degree of integration, high optical bandwidth, and large speed, coupled with the development of a wide range of integrated optical functions. Silicon-based microring resonators are a key building block that can be used to realize many optical functions, such as switching, multiplexing, demultiplexing, and detection of optical waves. The ability to tune the resonances of microring resonators is highly desirable in many of their applications. In this work, the study and application of a thermally wavelength-tunable photonic switch based on a silicon microring resonator is presented. Devices with a 10 μm diameter were systematically studied and used in the design. The resonance wavelength was tuned by a thermally induced refractive index change using a specially designed local micro-heater. While thermo-optic tuning has moderate speed compared with electro-optic and all-optical tuning, silicon's high thermo-optic coefficient allows a much wider wavelength-tunable range to be realized. The device design was verified and optimized by optical and thermal simulations. The fabrication and characterization of the device were also carried out. The microring resonator has a measured FSR of ∼18 nm, a FWHM in the range 0.1-0.2 nm, and a Q around 10,000. A wide tunable range (>6.4 nm) was achieved with the switch, which enables dense wavelength division multiplexing (DWDM) with a channel spacing of 0.2 nm. The time response of the switch was measured to be on the order of 10 μs, with a low power consumption of ∼11.9 mW/nm. The measured results are in agreement with the simulations. Important applications using the tunable photonic switch were demonstrated in this work. 1×4 and 4×4 reconfigurable photonic switches were implemented by using multiple switches with a common bus waveguide.
The results suggest the feasibility of on-chip DWDM for the development of large-scale integrated photonics. Using the tunable switch for output wavelength control, a fiber laser was demonstrated with an erbium-doped fiber amplifier as the gain medium. For the first time, this approach integrated on-chip silicon photonic wavelength control.
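The reported ring parameters are mutually consistent: for a traveling-wave ring resonator, FSR ≈ λ²/(n_g·L), with L the ring circumference and n_g the group index. Back-computing n_g from the measured FSR (assuming operation near 1550 nm, which the abstract does not state explicitly):

```python
import math

wavelength = 1.55e-6   # assumed C-band operating wavelength, m
diameter = 10e-6       # ring diameter from the abstract, m
fsr = 18e-9            # measured free spectral range, m

L = math.pi * diameter             # ring circumference
n_g = wavelength**2 / (fsr * L)    # group index implied by the FSR
print(round(n_g, 2))               # → 4.25; ~4.2 is typical of a silicon wire waveguide

# Power to tune across the full >6.4 nm range at ~11.9 mW/nm
print(round(6.4 * 11.9, 1))        # → 76.2 (mW)
```

The sanity check supports the numbers: a group index near 4.2 is what one expects for a sub-micron silicon waveguide, and full-range tuning costs well under 100 mW.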
Abstract:
Recently, wireless network technology has grown at such a pace that scientific research results become practical reality in a very short time span. Mobile wireless communications have witnessed the adoption of several generations, each complementing and improving on the former. One mobile system that features high data rates and an open network architecture is 4G. Currently, the research community and industry in the field of wireless networks are working on possible choices for solutions in the 4G system. 4G is a collection of technologies and standards that will allow a range of ubiquitous computing and wireless communication architectures. The researcher considers the ability to guarantee reliable communications, from 100 Mbps in high-mobility links to as high as 1 Gbps for low-mobility users, together with high efficiency in spectrum usage, to be among the most important characteristics of future 4G mobile systems. In mobile wireless communications networks, one important factor is the coverage of large geographical areas. In 4G systems, a hybrid satellite/terrestrial network is crucial to providing users with coverage wherever needed. Subscribers thus require a reliable satellite link to access their services when they are in remote locations, where a terrestrial infrastructure is unavailable and they must rely upon satellite coverage. A good modulation and access technique is also required in order to transmit high data rates over satellite links to mobile users. This technique must adapt to the characteristics of the satellite channel and also be efficient in the use of the allocated bandwidth. Satellite links are fading channels when used by mobile users. Some measures designed to address these fading environments make use of: (1) spatial diversity (a two-receive-antenna configuration); (2) time diversity (channel interleaver/spreading techniques); and (3) upper-layer FEC.
The author proposes the use of OFDM (Orthogonal Frequency-Division Multiplexing) for the satellite link, increasing the time diversity. This technique will allow for an increase of the data rate, as primarily required by multimedia applications, and will also make optimal use of the available bandwidth. In addition, this dissertation approaches the use of Cooperative Satellite Communications for hybrid satellite/terrestrial networks. By using this technique, the satellite coverage can be extended to areas where there is no direct link to the satellite. For this purpose, a good channel model is necessary.
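At its core, OFDM maps a block of symbols onto orthogonal subcarriers with an inverse FFT and guards against multipath with a cyclic prefix. A minimal round-trip sketch (QPSK over 64 subcarriers is an illustrative configuration, not the dissertation's link design):

```python
import numpy as np

def ofdm_modulate(symbols, cp_len):
    """Map one block of symbols onto orthogonal subcarriers via the
    IFFT and prepend a cyclic prefix."""
    time = np.fft.ifft(symbols)
    return np.concatenate([time[-cp_len:], time])

def ofdm_demodulate(signal, cp_len):
    """Strip the cyclic prefix and recover the symbols with the FFT."""
    return np.fft.fft(signal[cp_len:])

rng = np.random.default_rng(0)
bits = rng.integers(0, 2, size=(64, 2))
qpsk = (2 * bits[:, 0] - 1) + 1j * (2 * bits[:, 1] - 1)  # 64 subcarriers
rx = ofdm_demodulate(ofdm_modulate(qpsk, cp_len=16), cp_len=16)
print(np.allclose(rx, qpsk))  # True: subcarriers stay orthogonal
```

Over a fading satellite channel, the per-subcarrier flat-fading structure is what makes interleaving and coding across subcarriers (the time/frequency diversity discussed above) effective.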
Abstract:
The aim of this research was to demonstrate a high-current and stable field emission (FE) source based on carbon nanotubes (CNTs) and an electron multiplier microchannel plate (MCP), and to design efficient field emitters. In recent years various CNT-based FE devices have been demonstrated, including field emission displays, x-ray sources, and many more. However, to use CNTs as the source in high-power microwave (HPM) devices, a higher and stable current in the range of a few milliamperes to amperes is required. To achieve such high current we developed a novel technique of introducing an MCP between the CNT cathode and the anode. An MCP is an array of electron multipliers; it operates by avalanche multiplication of secondary electrons, which are generated when electrons strike the channel walls of the MCP. The FE current from CNTs is enhanced by the avalanche multiplication of secondary electrons, and in addition the MCP protects the CNTs from irreversible damage during vacuum arcing. Conventional MCPs are not suitable for this purpose due to the lower secondary emission properties of their materials. To achieve higher and stable currents we designed and fabricated a unique ceramic MCP consisting of high-SEY materials. The MCP was fabricated using optimum design parameters, including channel dimensions and material properties obtained from charged particle optics (CPO) simulation. The Child-Langmuir law, which gives the limiting current density from an electron source, was taken into account during the system design and experiments. Each MCP channel consisted of MgO-coated CNTs, chosen from various material systems for its very high SEY. With the MCP inserted between the CNT cathode and the anode, a stable and higher emission current was achieved, ∼25 times higher than without the MCP. A brighter emission image was also observed due to the enhanced emission current.
The obtained results are a significant technological advance, and this research holds promise for electron sources in a new generation of lightweight, efficient, and compact microwave devices for telecommunications in satellite or space applications. As part of this work, novel emitters consisting of a multistage geometry with improved FE properties were also developed.
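The Child-Langmuir law invoked above bounds the space-charge-limited current density between planar electrodes: J = (4ε₀/9)·√(2e/m)·V^(3/2)/d². A quick numerical check with illustrative voltage and gap values (not the dissertation's geometry):

```python
import math

EPS0 = 8.854e-12      # vacuum permittivity, F/m
E_CHARGE = 1.602e-19  # electron charge, C
M_E = 9.109e-31       # electron mass, kg

def child_langmuir_j(voltage, gap):
    """Space-charge-limited current density (A/m^2) for a planar diode:
    J = (4*eps0/9) * sqrt(2e/m) * V**1.5 / d**2."""
    return (4 * EPS0 / 9) * math.sqrt(2 * E_CHARGE / M_E) \
        * voltage**1.5 / gap**2

# Illustrative numbers: 1 kV across a 0.5 mm gap
print(child_langmuir_j(1000.0, 0.5e-3))  # on the order of 3e5 A/m^2
```

The V^(3/2)/d² scaling is the practical design constraint: no matter how good the emitter, the extracted current density cannot exceed this space-charge limit for a given voltage and gap.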
Abstract:
In communities throughout the developing world, faith-based organizations (FBOs) focus on goals such as eradicating poverty, bolstering local economies, and fostering community development, while premising their activities and interaction with local communities on theological and religious understandings. Due to their pervasive interaction with participants, the religious ideologies of these FBOs impact the religious, economic, and social realities of communities. This study investigates the relationship between the international FBO World Vision International (WVI) and changes to religious, economic, and social ideologies and practices in Andean indigenous communities in southern Peru. This study aims to contribute to the greater knowledge and understanding of (1) institutionalized development strategies, (2) faith-based development, and (3) how institutionalized development interacts with processes of socio-cultural change. Based on fifteen months of field research, this study involved qualitative and quantitative methods of participant-observation, interviews, surveys, and document analysis. Data were primarily collected from households in a sample of eight communities in the Pitumarca and Combapata districts, province of Canchis, department of Cusco, Peru, where two WVI Area Development Programs were operating. Research findings reveal that there is a relationship between WVI's intervention and some changes to religious, economic, and social structure (values, ideologies, and norms) and practices, demonstrating that structure and practices change when social systems are altered by new social actors. Findings also revealed that the impacts of WVI's intervention greatly increased over the course of several years, demonstrating that changes in structure and practice occur gradually and need a period of time to take root.
Finally, results showed that the impacts of WVI's intervention were primarily limited to those most closely involved with the organization, revealing that the ability of one social actor to incite changes in the structure and practice of another actor is associated with the intensity of the relationship between the social actors. The findings of this study should be useful for drawing conclusions about, and strengthening understanding of, how faith-based development organizations impact aspects of religious, economic, and social life in the areas where they work.
Abstract:
Since multimedia data, such as images and videos, are far more expressive and informative than ordinary text-based data, people find them more attractive for communication and expression. Additionally, with the rising popularity of social networking tools such as Facebook and Twitter, multimedia information retrieval can no longer be considered a solitary task. Rather, people constantly collaborate with one another while searching for and retrieving information. But the very cause of the popularity of multimedia data, the huge and varied information a single data object can carry, makes their management a challenging task. Multimedia data are commonly represented as multidimensional feature vectors and carry high-level semantic information. These two characteristics make them very different from traditional alpha-numeric data. Thus, trying to manage them with frameworks and rationales designed for primitive alpha-numeric data is inefficient. An index structure is the backbone of any database management system, and index structures present in existing relational database management frameworks cannot handle multimedia data effectively. Thus, in this dissertation, a generalized multidimensional index structure is proposed which seamlessly accommodates, within one single framework, the atypical multidimensional representation and the semantic information carried by different multimedia data. Additionally, the dissertation investigates the evolving relationships among multimedia data in a collaborative environment and how such information can help to customize the design of the proposed index structure when it is used to manage multimedia data in a shared environment. Extensive experiments were conducted to demonstrate the usability and superior performance of the proposed framework over current state-of-the-art approaches.
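The role such an index plays can be illustrated with a deliberately naive sketch: multimedia objects reduced to feature vectors, inserted into an index, and retrieved by nearest-neighbor search. The class, the names, and the brute-force search below are illustrative stand-ins, not the generalized structure proposed in the dissertation:

```python
import numpy as np

class FeatureIndex:
    """Minimal multidimensional index sketch: stores feature vectors and
    answers k-nearest-neighbor queries. Brute force; a real system uses
    a tree- or hash-based structure to scale."""
    def __init__(self, dim):
        self.dim = dim
        self.vectors = np.empty((0, dim))
        self.keys = []

    def insert(self, key, vec):
        self.vectors = np.vstack([self.vectors, np.asarray(vec)])
        self.keys.append(key)

    def query(self, vec, k=1):
        # Euclidean distance from the query to every stored vector
        d = np.linalg.norm(self.vectors - np.asarray(vec), axis=1)
        return [self.keys[i] for i in np.argsort(d)[:k]]

idx = FeatureIndex(dim=3)
idx.insert("img_a", [0.1, 0.9, 0.2])   # e.g. a color-histogram feature
idx.insert("img_b", [0.8, 0.1, 0.1])
idx.insert("vid_c", [0.15, 0.85, 0.25])
print(idx.query([0.12, 0.88, 0.22], k=2))  # → ['img_a', 'vid_c']
```

The dissertation's contribution lies precisely where this sketch is weakest: handling semantic information alongside the vectors and scaling the search within one generalized structure.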
Abstract:
Unmanned Aerial Vehicles (UAVs) may develop cracks, erosion, delamination, or other damage due to aging, fatigue, or extreme loads. Identifying this damage is critical for the safe and reliable operation of the systems. Structural Health Monitoring (SHM) is capable of determining the condition of systems automatically and continually by processing and interpreting the data collected from a network of sensors embedded in the systems. With the desired awareness of the systems' health conditions, SHM can greatly reduce operational cost and speed up maintenance processes. The purpose of this study is to develop an effective, low-cost, flexible, and fault-tolerant structural health monitoring system. The proposed Index Based Reasoning (IBR) system started as a simple look-up-table-based diagnostic system. Later, Fast Fourier Transform analysis and neural network diagnosis with self-learning capabilities were added. The current version is capable of classifying different health conditions using the learned characteristic patterns, after training with sensory data acquired from the operating system under different conditions. The proposed IBR systems are hierarchical, distributed networks deployed in systems to monitor their health conditions. Each IBR node processes the sensory data to extract the features of the signal. Classification tools are then used to evaluate the local conditions with health index (HI) values. The HI values are carried to other IBR nodes in the next level of the structured network. The overall health condition of the system can be obtained by evaluating all the local health conditions. The performance of IBR systems has been evaluated by both simulation and experimental studies. The IBR system has been proven successful on simulated cases of a turbojet engine, a high-displacement actuator, and a quad-rotor helicopter.
In its application to experimental data from a four-rotor helicopter, the IBR system also performed with acceptable accuracy. The proposed IBR system is well suited to serve as the onboard structural health management system for low-cost UAVs. It can also be a backup system for aircraft and advanced Space Utility Vehicles.
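The hierarchical flow of health index (HI) values can be caricatured in a few lines: each node converts a local feature into an HI, and a parent node aggregates the children's values. Both rules below (a threshold ratio and a worst-case minimum) are invented for illustration; the actual IBR nodes use FFT features and neural network classifiers:

```python
def node_health(feature, threshold):
    """Toy local health index: 1.0 = healthy, tending toward 0 as the
    extracted feature exceeds its threshold (illustrative rule only)."""
    return min(1.0, threshold / feature) if feature > 0 else 1.0

def aggregate(his):
    """Overall condition: the worst local index dominates, so a single
    failing component is not masked by healthy ones."""
    return min(his)

# Three local IBR nodes report vibration-feature magnitudes
readings = [0.8, 1.2, 0.5]
his = [node_health(r, threshold=1.0) for r in readings]
print(aggregate(his))  # the degraded middle node drives the system HI
```

The structural point survives the simplification: local evaluation plus level-by-level aggregation is what lets the network stay distributed and fault tolerant.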
Abstract:
This research is based on the premises that teams can be designed to optimize their performance, and that appropriate team coordination is a significant factor in team outcome performance. Contingency theory argues that the effectiveness of a team depends on the right fit of the team design factors to the particular job at hand. Therefore, organizations need computational tools capable of predicting the performance of different team configurations. This research created an agent-based model of teams called the Team Coordination Model (TCM). The TCM estimates the coordination load and performance of a team based on its composition, coordination mechanisms, and the job's structural characteristics. The TCM can be used to determine the team design characteristics most likely to lead the team to optimal performance. The TCM is implemented as an agent-based discrete-event simulation application built using Java and the Cybele Pro agent architecture. The model implements the effect of individual team design factors on team processes, but the resulting performance emerges from the behavior of the agents. These team member agents use decision making, and explicit and implicit mechanisms, to coordinate the job. The model validation included the comparison of the TCM's results with statistics from a real team and with the results predicted by the team performance literature. An illustrative 2^(6-1) fractional factorial experimental design demonstrates the application of the simulation model to the design of a team. The results from the ANOVA analysis have been used to recommend the combination of levels of the experimental factors that optimizes the completion time for a team that runs sailboat races. This research's main contribution to the team modeling literature is a model capable of simulating teams working in complex job environments. The TCM implements a stochastic job structure model capable of capturing some of the complexity not captured by current models.
In a stochastic job structure, the tasks required to complete the job change during the team's execution of the job. This research proposed three new types of dependencies between tasks required to model a job as a stochastic structure: the conditional sequential, single-conditional sequential, and merge dependencies.
Abstract:
This dissertation is one of the earliest to systematically apply and empirically test the resource-based view (RBV) in the context of nascent social ventures in a large-scale study. Social ventures are entrepreneurial ventures organized as nonprofit, for-profit, or hybrid organizations whose primary purpose is to address unmet social needs and create social value. Nascent social ventures face resource gaps and engage in partnerships or alliances as one means of accessing external resources. These partnerships with different sectors facilitate social ventures' innovative and earned-income strategies, and assist in the development of adequate heterogeneous resource conditions that impact competitive advantage. Competitive advantage in the context of nascent social ventures is achieved through the creation of value and the achievement of venture development activities and launching. The relationships between partnerships, heterogeneous resource conditions, strategies, and competitive advantage are analyzed in the context of nascent social ventures that participated in business plan competitions. A content analysis of 179 social venture business plans and an exploratory follow-up survey of 72 of these ventures are used to analyze these relationships using regression, ANOVA, correlations, t-tests, and non-parametric statistics. The findings suggest a significant positive relationship between competitive advantage and partnership diversity, heterogeneous resource conditions, social innovation, and earned income. Social capital is the type of resource most significantly related to competitive advantage. Founder previous start-up experience, client location, and business plan completeness are also found to be significant in the relationship between partnership diversity and competitive advantage. Finally, the findings suggest that hybrid social ventures create a greater competitive advantage than nonprofit or for-profit social ventures.
Consequently, this dissertation not only provides academics further insight into the factors that impact nascent social value creation, venture development, and the ability to launch, but also offers practitioners guidance on how best to organize certain processes to create a competitive advantage. As a result, more insight is gained into the nascent social venture creation process and how these ventures can have a greater impact on society.
Abstract:
Today, most conventional surveillance networks are based on analog systems, which have many constraints, such as manpower and high-bandwidth requirements. These constraints have become a barrier to the development of today's surveillance networks. This dissertation describes a digital surveillance network architecture based on the H.264 coding/decoding (CODEC) System-on-a-Chip (SoC) platform. The proposed digital surveillance network architecture includes three major layers: the software layer, the hardware layer, and the network layer. The following outlines the contributions to the proposed digital surveillance network architecture. (1) We implement an object recognition system and an object categorization system on the software layer by applying several Digital Image Processing (DIP) algorithms. (2) For a better compression ratio and higher-quality video transfer, we implement two new modules on the hardware layer of the H.264 CODEC core, i.e., the background elimination module and the Directional Discrete Cosine Transform (DDCT) module. (3) Furthermore, we introduce a Digital Signal Processor (DSP) sub-system on the main bus of the H.264 SoC platform as the major hardware support for our software architecture. Thus, we combine the software and hardware platforms into an intelligent surveillance node. Lab results show that the proposed surveillance node can dramatically save network resources such as bandwidth and storage capacity.
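The idea behind background elimination can be sketched with the classic running-average model: maintain a slowly adapting background estimate and flag pixels that deviate from it. This is a software simplification for illustration; the dissertation implements its module in hardware within the H.264 core:

```python
import numpy as np

def update_background(bg, frame, alpha=0.05):
    """Running-average background model: slowly absorb scene changes."""
    return (1 - alpha) * bg + alpha * frame

def foreground_mask(bg, frame, thresh=30):
    """Pixels far from the background model are foreground."""
    return np.abs(frame - bg) > thresh

# Static scene with one bright moving object
bg = np.zeros((4, 4))
frame = bg.copy()
frame[1, 2] = 200.0          # the moving object
mask = foreground_mask(bg, frame)
print(mask.sum())            # → 1 foreground pixel
bg = update_background(bg, frame)
```

Encoding only the foreground regions at full fidelity is what yields the bandwidth and storage savings reported above.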
Abstract:
This dissertation introduces a new system for handwritten text recognition based on an improved neural network design. Most existing neural networks treat the mean square error function as the standard error function. The system proposed in this dissertation utilizes the mean quartic error function, whose third and fourth derivatives are non-zero. Consequently, many improvements in the training methods were achieved. The training results are carefully assessed before and after each update. To evaluate the performance of a training system, there are three essential factors to consider, in order from high to low priority: (1) the error rate on the testing set, (2) the processing time needed to recognize a segmented character, and (3) the total training time and, subsequently, the total testing time. It is observed that bounded training methods accelerate the training process, while semi-third-order training methods, next-minimal training methods, and preprocessing operations reduce the error rate on the testing set. Empirical observations suggest that two combinations of training methods are needed for recognizing characters of different cases. Since character segmentation is required for word and sentence recognition, this dissertation also provides an effective rule-based segmentation method, which differs from conventional adaptive segmentation methods. Dictionary-based correction is utilized to correct mistakes resulting from the recognition and segmentation phases. The integration of the segmentation methods with the handwritten character recognition algorithm yielded an accuracy of 92% for lower-case characters and 97% for upper-case characters. The testing database consists of 20,000 handwritten characters, 10,000 for each case. Recognizing the 10,000 handwritten characters of the testing phase required 8.5 seconds of processing time.
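The difference between the standard and proposed error functions is easy to see side by side: the mean quartic error has non-zero third and fourth derivatives, and its gradient, 4(y−t)³, grows much faster with the error than the MSE gradient 2(y−t). Illustrative numbers only:

```python
import numpy as np

def mean_square_error(y, t):
    """Standard MSE loss: gradient w.r.t. y is 2*(y - t)/n per sample."""
    return np.mean((y - t) ** 2)

def mean_quartic_error(y, t):
    """Quartic loss: non-zero 3rd/4th derivatives, and a per-sample
    gradient proportional to 4*(y - t)**3 that penalizes large errors
    much more sharply."""
    return np.mean((y - t) ** 4)

y = np.array([0.1, 0.9, 0.4])   # network outputs
t = np.array([0.0, 1.0, 1.0])   # targets
print(mean_square_error(y, t), mean_quartic_error(y, t))
print(2 * (y - t))        # MSE per-sample gradient: uniform scale
print(4 * (y - t) ** 3)   # quartic gradient: dominated by the 0.6 error
```

The cubic gradient concentrates the training signal on the worst-classified samples, which is one plausible reading of why the modified training methods behave differently from MSE-based ones.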