9 results for center-peak intensity ratio

in Digital Commons at Florida International University


Relevance:

100.00%

Abstract:

The Locard exchange principle proposes that a person cannot enter or leave an area, or come in contact with an object, without an exchange of materials. In the case of scent evidence, the suspect leaves his scent at the crime scene itself or on objects found therein. Human scent evidence collected from a crime scene can be evaluated through the use of specially trained canines to determine an association between the evidence and a suspect. To date, there has been limited research into the volatile organic compounds (VOCs) that comprise human odor and their usefulness in distinguishing among individuals. For the purposes of this research, human scent is defined as the most abundant volatile organic compounds present in the headspace above collected odor samples. An instrumental method was created for the analysis of the VOCs present in human scent and was utilized to optimize the materials used for the collection and storage of human scent evidence. This research project identified the volatile organic compounds present in the headspace above scent samples collected from different individuals and from various regions of the body, with the primary focus on the armpit area and the palms of the hands. Human scent from the armpit area and palms of an individual sampled over time shows lower variation in the relative peak area ratios of the common compounds present than is seen across a population. A comparison of the compounds present in human odor for an individual over time and across a population demonstrates that it is possible to instrumentally differentiate individuals based on the volatile organic compounds above collected odor samples.
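The intra-individual versus population comparison of relative peak area ratios described above can be sketched numerically. The compound names, peak areas, and the use of a simple coefficient of variation below are invented for illustration and are not the study's actual data or statistical treatment.

```python
# Hypothetical sketch: normalize chromatogram peak areas to relative ratios,
# then compare variation within one subject over time vs. across a population.
import statistics

def relative_ratios(peak_areas):
    """Normalize raw peak areas so each sample's ratios sum to 1."""
    total = sum(peak_areas.values())
    return {compound: area / total for compound, area in peak_areas.items()}

def variation(samples, compound):
    """Coefficient of variation (%) of a compound's relative ratio."""
    ratios = [relative_ratios(s)[compound] for s in samples]
    return 100 * statistics.stdev(ratios) / statistics.mean(ratios)

# One subject sampled three times (similar proportions each time)...
subject = [
    {"nonanal": 120, "decanal": 80, "hexanoic acid": 40},
    {"nonanal": 130, "decanal": 85, "hexanoic acid": 38},
    {"nonanal": 115, "decanal": 78, "hexanoic acid": 44},
]
# ...versus single samples from three different people.
population = [
    {"nonanal": 120, "decanal": 80, "hexanoic acid": 40},
    {"nonanal": 60, "decanal": 150, "hexanoic acid": 90},
    {"nonanal": 200, "decanal": 30, "hexanoic acid": 20},
]

cv_subject = variation(subject, "nonanal")
cv_population = variation(population, "nonanal")
assert cv_subject < cv_population  # lower intra-individual variation
```

With data of this shape, low within-subject variation relative to population variation is what makes instrumental differentiation of individuals plausible.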

Relevance:

100.00%

Abstract:

This study demonstrates the compositional heterogeneity of a protein-like fluorescence emission signal (T-peak; excitation/emission maximum at 280/325 nm) of dissolved organic matter (DOM) samples collected from subtropical river and estuarine environments. Natural water samples were collected from the Florida Coastal Everglades ecosystem. The samples were ultrafiltered and excitation–emission fluorescence matrices were obtained. The T-peak intensity correlated positively with the N concentration of the ultrafiltered DOM solution (UDON), although the low correlation coefficient (r²=0.140, p<0.05) suggested the coexistence of proteins with other classes of compounds in the T-peak. As such, the T-peak was resolved by size exclusion chromatography. The elution curves showed that the T-peak was composed of two compounds with distinct molecular weights (MW), with nominal MWs of about >5×10⁴ (T1) and ∼7.6×10³ (T2), and with varying relative abundance among samples. The T1-peak intensity correlated strongly with [UDON] (r²=0.516, p<0.001), while the T2-peak did not, which suggested that the T-peak is composed of a mixture of compounds with different chemical structures and ecological roles, namely proteinaceous materials and presumably phenolic moieties in humic-like substances. Natural sources of the latter may include polyphenols leached from senescent plant materials, which are important precursors of humic substances. This idea is supported by the fact that polyphenols, such as gallic acid, an important constituent of hydrolysable tannins, and condensed tannins extracted from red mangrove (Rhizophora mangle) leaves exhibited fluorescence peaks in the close vicinity of the T-peak (260/346 and 275/313 nm, respectively). Based on this study, the application of the T-peak as a proxy for [DON] in natural waters may have limitations in coastal zones with significant terrestrial DOM input.
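The reported r² values are coefficients of determination from simple linear regression. A minimal sketch of that computation, with invented intensity and concentration values standing in for the measured data, might look like:

```python
# Ordinary least-squares r^2 between fluorescence intensity and [UDON].
def r_squared(x, y):
    """Coefficient of determination for a simple linear regression y ~ x."""
    n = len(x)
    mean_x, mean_y = sum(x) / n, sum(y) / n
    sxy = sum((xi - mean_x) * (yi - mean_y) for xi, yi in zip(x, y))
    sxx = sum((xi - mean_x) ** 2 for xi in x)
    syy = sum((yi - mean_y) ** 2 for yi in y)
    return sxy ** 2 / (sxx * syy)

# Hypothetical T1-peak intensities vs. [UDON] (arbitrary units).
t1_intensity = [1.2, 2.5, 3.1, 4.0, 5.2, 6.1]
udon = [0.10, 0.22, 0.27, 0.41, 0.48, 0.60]
r2 = r_squared(t1_intensity, udon)  # near 1 for this nearly linear toy data
```

A strong r² (as for T1 vs. [UDON]) supports a proxy relationship; a weak one (as for the bulk T-peak) signals interfering fluorophores.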

Relevance:

40.00%

Abstract:

Modern data centers host hundreds of thousands of servers to achieve economies of scale. Such a huge number of servers creates challenges for the data center network (DCN) to provide proportionally large bandwidth. In addition, the deployment of virtual machines (VMs) in data centers raises the requirements for efficient resource allocation and fine-grained resource sharing. Further, the large number of servers and switches in the data center consumes significant amounts of energy. Even though servers become more energy efficient with various energy saving techniques, the DCN still accounts for 20% to 50% of the energy consumed by the entire data center. The objective of this dissertation is to enhance DCN performance as well as its energy efficiency by conducting optimizations on both the host and network sides. First, as the DCN demands huge bisection bandwidth to interconnect all the servers, we propose a parallel packet switch (PPS) architecture that directly processes variable-length packets without segmentation-and-reassembly (SAR). The proposed PPS achieves large bandwidth by combining the switching capacities of multiple fabrics, and it further improves switch throughput by avoiding the padding bits of SAR. Second, since certain resource demands of a VM are bursty and stochastic in nature, to satisfy both deterministic and stochastic demands in VM placement we propose the Max-Min Multidimensional Stochastic Bin Packing (M3SBP) algorithm. M3SBP calculates an equivalent deterministic value for the stochastic demands and maximizes the minimum resource utilization ratio of each server. Third, to provide the necessary traffic isolation for VMs that share the same physical network adapter, we propose the Flow-level Bandwidth Provisioning (FBP) algorithm. By reducing the flow scheduling problem to multiple stages of packet queuing problems, FBP guarantees the provisioned bandwidth and delay performance for each flow.
Finally, although DCNs are typically provisioned with full bisection bandwidth, their traffic demonstrates fluctuating patterns, so we propose a joint host-network optimization scheme to enhance the energy efficiency of DCNs during off-peak traffic hours. The proposed scheme utilizes a unified representation method that converts the VM placement problem to a routing problem and employs depth-first and best-fit search to find efficient paths for flows.
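The M3SBP idea of replacing a stochastic demand with an equivalent deterministic value, then placing the VM to maximize the minimum utilization ratio, can be sketched as follows. The normal demand model, the overflow probability, and the server data are illustrative assumptions, not the dissertation's exact formulation.

```python
# Sketch of equivalent-deterministic-demand VM placement (M3SBP-style).
from statistics import NormalDist

def equivalent_demand(mean, std, overflow_prob=0.01):
    """Deterministic stand-in for a stochastic demand: exceeded with
    probability at most overflow_prob under an assumed normal model."""
    return mean + NormalDist().inv_cdf(1 - overflow_prob) * std

def place_vm(servers, demand):
    """Pick the server maximizing the minimum post-placement utilization
    ratio; each server maps resource -> (used, capacity)."""
    best, best_score = None, -1.0
    for name, resources in servers.items():
        ratios = [(used + demand[r]) / cap for r, (used, cap) in resources.items()]
        if max(ratios) > 1.0:   # placement would overflow some dimension
            continue
        score = min(ratios)     # max-min objective
        if score > best_score:
            best, best_score = name, score
    return best

# VM with a deterministic CPU demand and a bursty bandwidth demand.
vm = {"cpu": 2.0, "bw": equivalent_demand(mean=100.0, std=20.0)}
servers = {
    "s1": {"cpu": (6.0, 8.0), "bw": (300.0, 1000.0)},
    "s2": {"cpu": (2.0, 8.0), "bw": (700.0, 1000.0)},
}
chosen = place_vm(servers, vm)  # s2: its minimum ratio (0.5) beats s1's
```

The max-min objective keeps utilization balanced across resource dimensions instead of saturating one dimension while leaving another idle.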

Relevance:

40.00%

Abstract:

Orthogonal Frequency-Division Multiplexing (OFDM) has proven to be a promising technology for enabling higher data rates. Multicarrier Code-Division Multiple Access (MC-CDMA) is a transmission technique that combines the advantages of both OFDM and Code-Division Multiple Access (CDMA) so as to allow high transmission rates over severely time-dispersive multipath channels without the need for a complex receiver implementation. MC-CDMA also exploits frequency diversity via the different subcarriers, and therefore allows high-code-rate systems to achieve good Bit Error Rate (BER) performance. Furthermore, the spreading in the frequency domain makes the time synchronization requirement much less stringent than in traditional direct-sequence CDMA schemes. There are still some problems with MC-CDMA. One is the high Peak-to-Average Power Ratio (PAPR) of the transmit signal. High PAPR leads to nonlinear distortion in the amplifier and results in inter-carrier self-interference plus out-of-band radiation. Suppressing the Multiple Access Interference (MAI) is another crucial problem in MC-CDMA systems. Imperfect cross-correlation characteristics of the spreading codes and multipath fading destroy the orthogonality among users and thus cause MAI, which produces serious BER degradation in the system. Moreover, in an uplink system the received signals at a base station are always asynchronous. This also destroys the orthogonality among users and hence generates MAI, which degrades system performance. Beyond those two problems, interference must always be considered seriously for any communication system. In this dissertation, we design a novel MC-CDMA system that has low PAPR and mitigated MAI. A new semi-blind channel estimation and multi-user data detection scheme based on Parallel Interference Cancellation (PIC) is applied in the system.
Low-Density Parity-Check (LDPC) codes have also been introduced into the system to improve performance. Different interference models are analyzed in multicarrier communication systems, and effective interference suppression for MC-CDMA systems is then employed in this dissertation. The experimental results indicate that our system not only significantly reduces the PAPR and MAI but also effectively suppresses outside interference with low complexity. Finally, we present a practical cognitive application of the proposed system on a software-defined radio platform.
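The PAPR problem the abstract highlights is easy to see numerically: when many subcarriers add coherently, the instantaneous peak power far exceeds the average. This toy example (invented subcarrier values, a naive inverse DFT rather than any scheme from the dissertation) illustrates the metric.

```python
# PAPR of a toy multicarrier symbol built with a naive inverse DFT.
import cmath, math

def ifft(symbols):
    """Naive inverse DFT (O(N^2)) producing time-domain samples."""
    n = len(symbols)
    return [sum(s * cmath.exp(2j * cmath.pi * k * t / n)
                for k, s in enumerate(symbols)) / n
            for t in range(n)]

def papr_db(samples):
    """PAPR in dB: peak instantaneous power over mean power."""
    powers = [abs(x) ** 2 for x in samples]
    return 10 * math.log10(max(powers) / (sum(powers) / len(powers)))

# 8 BPSK-modulated subcarriers: the all-ones pattern is the worst case
# (all carriers add in phase), a mixed-sign pattern spreads energy in time.
worst = papr_db(ifft([1] * 8))
mixed = papr_db(ifft([1, -1, 1, 1, -1, 1, -1, -1]))
assert worst > mixed
```

For N same-phase subcarriers the PAPR is N (here 10·log₁₀(8) ≈ 9 dB), which is why PAPR-reduction techniques matter as the subcarrier count grows.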


Relevance:

30.00%

Abstract:

A hurricane is one of the most destructive and costly natural hazards to the built environment, and its impact on low-rise buildings in particular is unacceptably severe. The major objective of this research was to perform a parametric evaluation of internal pressure (IP) for the wind-resistant design of low-rise buildings and for wind-driven natural ventilation applications. For this purpose, a multi-scale experimental approach, i.e., full-scale at the Wall of Wind (WoW) and small-scale at a Boundary Layer Wind Tunnel (BLWT), combined with a Computational Fluid Dynamics (CFD) approach, was adopted. This provided a new capability to realistically assess wind pressures on internal volumes ranging from the small spaces formed between roof tiles and the deck, to attics, to room partitions. The effects of sudden breaching, of existing dominant openings on building envelopes, and of compartmentalization of the building interior on the IP were systematically investigated. Results of this research indicated: (i) for sudden breaching of dominant openings, the transient overshooting response was lower than the subsequent steady-state peak IP, and internal volume correction was necessary for low-wind-speed testing facilities. For example, a building without volume correction experienced a response four times faster and exhibited 30–40% lower mean and peak IP; (ii) for existing openings, vent openings uniformly distributed along the roof alleviated the IP, whereas one-sided openings aggravated it; (iii) larger dominant openings produced higher IP on the building envelope, and an off-center opening on the wall exhibited 30–40% higher IP than center-located openings; (iv) compartmentalization amplified the intensity of the IP; and (v) significant underneath pressure was measured for field tiles, warranting its consideration during net pressure evaluations.
The part of the study aimed at wind-driven natural ventilation indicated: (i) the IP due to cross ventilation was 1.5 to 2.5 times higher for A_inlet/A_outlet > 1 than for A_inlet/A_outlet < 1, which in effect reduced the mixing of air inside the building and hence the ventilation effectiveness; (ii) the presence of multi-room partitioning increased the pressure differential and consequently the air exchange rate. Overall, good agreement was found between the observed large-scale, small-scale, and CFD-based IP responses. Comparisons with ASCE 7-10 consistently demonstrated that the code underestimates peak positive and suction IP.

Relevance:

30.00%

Abstract:

Widespread damage to roofing materials (such as tiles and shingles) on low-rise buildings, even in weaker hurricanes, has raised concerns regarding design load provisions and construction practices. Currently, the building codes used for designing low-rise building roofs are mainly based on testing results from building models that generally do not simulate the architectural features of roofing materials, features that may significantly influence the wind-induced pressures. Full-scale experimentation was conducted under high winds to investigate the effects of the architectural details of high-profile roof tiles and asphalt shingles on the net pressures that are often responsible for damage to these roofing materials. Effects on the vulnerability of roofing materials were also studied. Different roof models with bare, tiled, and shingled roof decks were tested. Pressures acting on both the top and bottom surfaces of the roofing materials were measured to understand their effects on the net uplift loading. The area-averaged peak pressure coefficients obtained from bare, tiled, and shingled roof decks were compared. In addition, a set of wind tunnel tests on a tiled roof deck model was conducted to verify the effects of the tiles' cavity internal pressure. Both the full-scale and the wind tunnel test results showed that the underside pressure of a roof tile could either aggravate or alleviate wind uplift on the tile, depending on its orientation on the roof with respect to the wind angle of attack. For shingles, the underside pressure could aggravate wind uplift if the shingle is located near the center of the roof deck. Bare deck modeling to estimate design wind uplift on shingled decks may be acceptable for most locations but not for field locations; there it could underestimate the uplift on shingles by 30–60%.
In addition, some initial quantification of the effects of roofing materials on wind uplift was performed by studying the wind uplift load ratio for tiled versus bare deck and shingled versus bare deck. Vulnerability curves, with and without considering the effects of tiles' cavity internal pressure, showed significant differences. Aerodynamic load provisions for low-rise buildings' roofs and their vulnerability can thus be more accurately evaluated by considering the effects of the roofing materials.
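The net-uplift reasoning in this abstract reduces to a sign convention worth making explicit: the net pressure coefficient on a tile or shingle is the top-surface coefficient minus the underside (cavity) coefficient, so underside pressure can either aggravate or alleviate uplift. The coefficient values below are invented for demonstration.

```python
# Net pressure coefficient on a roofing element (top minus underside).
def net_uplift_cp(cp_top, cp_bottom):
    """Net pressure coefficient; more negative means stronger uplift
    (suction) on the tile or shingle."""
    return cp_top - cp_bottom

# Top surface in suction (cp_top = -1.2). A positive underside pressure
# aggravates uplift; a negative underside pressure alleviates it.
aggravated = net_uplift_cp(-1.2, +0.4)   # more negative: stronger uplift
alleviated = net_uplift_cp(-1.2, -0.5)   # less negative: weaker uplift
assert aggravated < -1.2 < alleviated
```

This is why bare-deck measurements, which capture only the top-surface pressure, can underestimate uplift wherever the cavity pressure acts upward.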

Relevance:

30.00%

Abstract:

The standard highway assignment model in the Florida Standard Urban Transportation Modeling Structure (FSUTMS) is based on the equilibrium traffic assignment method. This method involves running several iterations of all-or-nothing capacity-restraint assignment with an adjustment of travel time to reflect the delays encountered in the associated iteration. The iterative link time adjustment process is accomplished through the Bureau of Public Roads (BPR) volume-delay equation. Since FSUTMS' traffic assignment procedure outputs daily volumes and the input capacities are given in hourly volumes, it is necessary to convert the hourly capacities to their daily equivalents when computing the volume-to-capacity ratios used in the BPR function. The conversion is accomplished by dividing the hourly capacity by a factor called the peak-to-daily ratio, referred to as CONFAC in FSUTMS. The ratio is computed as the highest hourly volume of a day divided by the corresponding total daily volume. While several studies have indicated that CONFAC is a decreasing function of the level of congestion, a constant value is used for each facility type in the current version of FSUTMS. This ignores the different congestion levels associated with each roadway and is believed to be one of the culprits behind traffic assignment errors. Traffic count data from across the state of Florida were used to calibrate CONFACs as a function of a congestion measure using the weighted least squares method. The calibrated functions were then implemented in FSUTMS through a procedure that takes advantage of the iterative nature of FSUTMS' equilibrium assignment method. The assignment results based on constant and variable CONFACs were then compared against ground counts for three selected networks. It was found that the accuracy of the two assignments was not significantly different, and that the hypothesized improvement in assignment results from the variable CONFAC model was not empirically evident.
It was recognized that many other factors beyond the scope and control of this study could contribute to this finding. It was recommended that further studies focus on the use of the variable CONFAC model with recalibrated parameters for the BPR function and/or with other forms of volume-delay functions.
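The BPR volume-delay computation with the CONFAC conversion described above can be sketched as follows. The parameter values (the conventional BPR a=0.15, b=4) and the sample CONFAC values are illustrative assumptions, not FSUTMS's calibrated numbers.

```python
# BPR volume-delay with hourly-to-daily capacity conversion via CONFAC.
def bpr_travel_time(free_flow_time, daily_volume, hourly_capacity, confac,
                    a=0.15, b=4):
    """Congested link time t = t0 * (1 + a * (v/c)^b), where the daily
    capacity c is obtained as hourly capacity / CONFAC."""
    daily_capacity = hourly_capacity / confac
    return free_flow_time * (1 + a * (daily_volume / daily_capacity) ** b)

# Same daily volume and hourly capacity: a smaller CONFAC (as on more
# congested roads, per the studies cited above) implies a larger daily
# capacity and hence less modeled delay than the constant-CONFAC case.
t_constant = bpr_travel_time(10.0, 18000, 2000, confac=0.10)
t_variable = bpr_travel_time(10.0, 18000, 2000, confac=0.08)
assert t_variable < t_constant
```

This is the mechanism by which a congestion-dependent CONFAC changes the volume-to-capacity ratios, and therefore the link times, fed back into each equilibrium iteration.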