963 results for PERFORMANCE PROFILES
Abstract:
The Tara Oceans Expedition (2009-2013) was a global survey of ocean ecosystems aboard the Sailing Vessel Tara. It carried out extensive measurements of environmental conditions and collected plankton (viruses, bacteria, protists and metazoans) for later analysis using modern sequencing and state-of-the-art imaging technologies. Tara Oceans data are particularly suited to studying the genetic, morphological and functional diversity of plankton. The present data set includes properties of seawater, particulate matter and dissolved matter that were measured from discrete water samples collected with Niskin bottles during the 2009-2013 Tara Oceans expedition. Properties include pigment concentrations from HPLC analysis (10 depths per vertical profile, 25 pigments per depth), the carbonate system (surface and 400 m; pH (total scale), CO2, pCO2, fCO2, HCO3, CO3, total alkalinity, total carbon, OmegaAragonite, OmegaCalcite, and dosage flags), nutrients (10 depths per vertical profile; NO2, PO4, NO2/NO3, Si, quality flags), DOC, CDOM, and dissolved oxygen isotopes. The Service National d'Analyse des Paramètres Océaniques du CO2, at the Université Pierre et Marie Curie, determined CT and AT potentiometrically. More than 200 vertical profiles of these properties were made across the world ocean. DOC, CDOM and dissolved oxygen isotopes are available only for the Arctic Ocean and Arctic Seas (2013).
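For readers unfamiliar with the saturation states listed above (OmegaAragonite, OmegaCalcite), these are conventionally defined as the calcium and carbonate ion concentration product relative to the mineral's stoichiometric solubility product; this is the standard relation, not something specific to this data set:

\[ \Omega = \frac{[\mathrm{Ca}^{2+}]\,[\mathrm{CO}_3^{2-}]}{K^{*}_{\mathrm{sp}}} \]

where K*_sp is the solubility product of aragonite or calcite at in situ temperature, salinity and pressure, and Ω > 1 indicates supersaturation with respect to that mineral.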
Abstract:
X-ray computed tomography (CT) is a non-invasive medical imaging technique that generates cross-sectional images by acquiring attenuation-based projection measurements at multiple angles. Since its first introduction in the 1970s, substantial technical improvements have led to the expanding use of CT in clinical examinations. CT has become an indispensable imaging modality for the diagnosis of a wide array of diseases in both pediatric and adult populations [1, 2]. Currently, approximately 272 million CT examinations are performed annually worldwide, with nearly 85 million of these in the United States alone [3]. Although this trend has decelerated in recent years, CT usage is still expected to increase mainly due to advanced technologies such as multi-energy [4], photon counting [5], and cone-beam CT [6].
Despite the significant clinical benefits, concerns have been raised regarding the population-based radiation dose associated with CT examinations [7]. From 1980 to 2006, the effective dose from medical diagnostic procedures rose six-fold, with CT contributing to almost half of the total dose from medical exposure [8]. For each patient, the risk associated with a single CT examination is likely to be minimal. However, the relatively large population-based radiation level has led to enormous efforts among the community to manage and optimize the CT dose.
As promoted by the international campaigns Image Gently and Image Wisely, exposure to CT radiation should be appropriate and safe [9, 10]. It is thus a shared responsibility to optimize the radiation dose of CT examinations. The key to dose optimization is to determine the minimum radiation dose that achieves the targeted image quality [11]. Based on this principle, dose optimization would benefit significantly from effective metrics that characterize the radiation dose and image quality of a CT exam. Moreover, if accurate predictions of radiation dose and image quality were possible before the initiation of the exam, it would be feasible to personalize the exam by adjusting the scanning parameters to achieve a desired level of image quality. The purpose of this thesis is to design and validate models that prospectively quantify patient-specific radiation dose and task-based image quality. The dual aim of the study is to translate these theoretical models into clinical practice by developing an organ-based dose monitoring system and image-based noise addition software for protocol optimization.
More specifically, Chapter 3 aims to develop an organ dose-prediction method for CT examinations of the body under constant tube current condition. The study effectively modeled the anatomical diversity and complexity using a large number of patient models with representative age, size, and gender distribution. The dependence of organ dose coefficients on patient size and scanner models was further evaluated. Distinct from prior work, these studies use the largest number of patient models to date with representative age, weight percentile, and body mass index (BMI) range.
With effective quantification of organ dose under constant tube current condition, Chapter 4 aims to extend the organ dose prediction system to tube current modulated (TCM) CT examinations. The prediction, applied to chest and abdominopelvic exams, was achieved by combining a convolution-based estimation technique that quantifies the radiation field, a TCM scheme that emulates modulation profiles from major CT vendors, and a library of computational phantoms with representative sizes, ages, and genders. The prospective quantification model is validated by comparing the predicted organ dose with the dose estimated based on Monte Carlo simulations with TCM function explicitly modeled.
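As a rough illustration of the convolution-style estimate described in Chapter 4, the sketch below weights a tube-current profile along the scan axis by an organ-specific contribution function; it is a minimal sketch under assumed inputs (the weighting function, sampling grid and dose-per-mAs factor are hypothetical placeholders), not the thesis implementation:

```python
import numpy as np

def organ_dose_tcm(ma_profile, organ_weight, dose_per_mas, rotation_time_s=0.5):
    """Hedged sketch of a weighted-sum (discrete convolution) organ dose estimate.

    ma_profile   : tube current (mA) sampled along z for a TCM exam
    organ_weight : relative contribution of each z position to the organ dose
    dose_per_mas : assumed organ dose coefficient per mAs (e.g. mGy/mAs)
    """
    w = np.asarray(organ_weight, dtype=float)
    w = w / w.sum()                                  # normalize contribution along z
    mas = np.asarray(ma_profile, dtype=float) * rotation_time_s
    return dose_per_mas * float(np.sum(mas * w))

# toy example: 40 cm scan range sampled every 1 cm, made-up modulation and weighting
z = np.arange(40)
ma = 200 + 80 * np.sin(z / 6.0)                      # hypothetical TCM profile
w = np.exp(-0.5 * ((z - 20.0) / 5.0) ** 2)           # hypothetical organ contribution
print(organ_dose_tcm(ma, w, dose_per_mas=0.07))      # arbitrary coefficient
```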
Chapter 5 aims to implement the organ dose-estimation framework in clinical practice by developing an organ dose-monitoring program based on commercial software (Dose Watch, GE Healthcare, Waukesha, WI). The first phase of the study focused on body CT examinations, so the patient's major body landmark information was extracted from the patient scout image in order to match clinical patients against a computational phantom in the library. The organ dose coefficients were estimated based on CT protocol and patient size as reported in Chapter 3. The exam CTDIvol, DLP, and TCM profiles were extracted and used to quantify the radiation field using the convolution technique proposed in Chapter 4.
With effective methods to predict and monitor organ dose, Chapter 6 aims to develop and validate improved measurement techniques for image quality assessment. It outlines the method developed to assess and predict quantum noise in clinical body CT images. Compared with previous phantom-based studies, this study accurately assessed the quantum noise in clinical images and further validated the correspondence between phantom-based measurements and the expected clinical image quality as a function of patient size and scanner attributes.
Chapter 7 aims to develop a practical strategy to generate hybrid CT images and assess the impact of dose reduction on diagnostic confidence for the diagnosis of acute pancreatitis. The general strategy is (1) to simulate synthetic CT images at multiple reduced-dose levels from clinical datasets using an image-based noise addition technique; (2) to develop quantitative and observer-based methods to validate the realism of simulated low-dose images; (3) to perform multi-reader observer studies on the low-dose image series to assess the impact of dose reduction on the diagnostic confidence for multiple diagnostic tasks; and (4) to determine the dose operating point for clinical CT examinations based on the minimum diagnostic performance to achieve protocol optimization.
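As a generic sketch of how an image-based noise addition step like the one in (1) is often approximated, assuming quantum noise scales with the inverse square root of dose and ignoring spatial noise correlation (this is an illustration, not the validated software developed in the thesis):

```python
import numpy as np

def simulate_reduced_dose(image_hu, dose_fraction, sigma_full_dose, rng=None):
    """Add zero-mean Gaussian noise so the total noise matches a reduced-dose scan.

    Assuming sigma_low = sigma_full / sqrt(dose_fraction), the component to add is
    sigma_full * sqrt(1/dose_fraction - 1). Noise texture/correlation is ignored.
    """
    rng = np.random.default_rng() if rng is None else rng
    sigma_added = sigma_full_dose * np.sqrt(1.0 / dose_fraction - 1.0)
    return image_hu + rng.normal(0.0, sigma_added, size=image_hu.shape)

# e.g. a 50% dose version of a full-dose slice with ~12 HU measured noise:
# low_dose_slice = simulate_reduced_dose(full_dose_slice, 0.5, sigma_full_dose=12.0)
```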
Chapter 8 concludes the thesis with a summary of accomplished work and a discussion about future research.
Abstract:
In the reconstruction of sea surface temperature (SST) from sedimentary archives, secondary sources, lateral transport and selective preservation are generally considered negligible influences on the primary signal. This is also true for the archaeal glycerol dialkyl glycerol tetraethers (GDGTs) that form the basis of the TEX86 SST proxy. Our samples represent four years of variability on a transect off Cape Blanc (NW Africa). We studied the subsurface production and the vertical and lateral transport of intact polar lipid and core GDGTs in the water column at high vertical resolution on the basis of suspended particulate matter (SPM) samples from the photic zone, the subsurface oxygen minimum zone (OMZ), nepheloid layers (NL) and the water column between these. Furthermore, we compared the water column SPM GDGT composition with that in the underlying surface sediments. This is the first study to report TEX86 values from the precursor intact polar lipids (IPLs) associated with specific head groups (IPL-specific TEX86). We show a clear deviation from the sea surface GDGT composition in the OMZ between 300 and 600 m. Since neither lateral transport nor selective degradation provides a satisfactory explanation for the observed TEX86-derived temperature profiles, which are biased towards higher temperatures for both core and IPL-specific TEX86 values, we suggest that subsurface in situ production by archaea with a distinct relationship between lipid biosynthesis and temperature is the responsible mechanism. However, in the NW African upwelling system the GDGT contribution of the OMZ to the surface sediments does not seem to affect the sedimentary TEX86, which shows no bias and still reflects the signal of the surface waters between 0 and 60 m.
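For context, the TEX86 index referred to above is conventionally defined from the fractional abundances of GDGTs containing one to three cyclopentane moieties and the crenarchaeol regio-isomer (Cren'):

\[ \mathrm{TEX}_{86} = \frac{[\mathrm{GDGT\text{-}2}] + [\mathrm{GDGT\text{-}3}] + [\mathrm{Cren'}]}{[\mathrm{GDGT\text{-}1}] + [\mathrm{GDGT\text{-}2}] + [\mathrm{GDGT\text{-}3}] + [\mathrm{Cren'}]} \]

with higher values calibrated to warmer temperatures; the IPL-specific variant applies the same ratio to GDGTs still carrying a given polar head group.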
Abstract:
Speech and language ability is not a unitary concept; rather, it is made up of multiple abilities such as grammar, articulation and vocabulary. Young children from socio-economically deprived areas are more likely to experience language difficulties than those living in more affluent areas. However, less is known about individual differences in language difficulties amongst young children from socio-economically deprived backgrounds. The present research examined 172 four-year-old children from socio-economically deprived areas on standardised measures of core language, receptive vocabulary, articulation, information conveyed and grammar. Of the total sample, 26% had difficulty in at least one area of language. While most children with speech and language difficulty had generally low performance in all areas, around one in ten displayed more uneven language abilities. For example, some children had generally good speech and language ability but specific difficulty with grammar. In such cases their difficulty is somewhat masked by good overall performance on language tests, yet they could still benefit from intervention in a specific area. The analysis also identified a number of typically achieving children with borderline speech and language difficulty who should be closely monitored.
Abstract:
In the past, many papers have shown that the coating of cutting tools often yields decreased wear rates and reduced coefficients of friction. Although different theories have been proposed, covering areas such as hardness theory, diffusion barrier theory, thermal barrier theory, and reduced friction theory, most have not addressed how and why coating tool substrates with hard materials such as titanium nitride (TiN), titanium carbide (TiC) and aluminium oxide (Al2O3) transforms the performance and life of cutting tools. This project discusses the complex interrelationship between the thermal barrier function and the relatively low sliding friction coefficient of TiN on an undulating tool surface, and presents the results of an investigation into the cutting characteristics and performance of EDMed surface-modified carbide cutting tool inserts. The tool inserts were coated with TiN by the physical vapour deposition (PVD) method. PVD coating is also known as ion plating, a general term for coating methods in which the film is created by attracting ionised metal vapour (in this case titanium) and ionised gas onto a negatively biased substrate surface. PVD was chosen because it is carried out at a temperature of not more than 500°C, whereas the chemical vapour deposition (CVD) process is carried out at a much higher temperature of about 850°C and in two stages of heating of the substrates. The high temperatures involved in CVD affect the strength of the (tool) substrates. In this study, comparative cutting tests using TiN-coated control specimens with no EDM surface structures and TiN-coated EDMed tools with a crater-like surface topography were carried out on mild steel grade EN-3. Various cutting speeds were investigated, up to 40% above the tool manufacturer's recommended speed. Fifteen minutes of cutting were carried out for each insert at each speed investigated; conventional tool inserts normally have a tool life of approximately 15 minutes of cutting. After every five cuts (passes), microscopic pictures of the tool wear profiles were taken in order to monitor the progressive wear on the rake face and on the flank of the insert. The power load was monitored for each cut using an on-board meter on the CNC machine to establish the amount of power needed for each stage of operation; the spindle drive of the machine is an 11 kW motor. The results confirmed the advantages of cutting at all speeds investigated using EDMed coated inserts, in terms of reduced tool wear and lower power loads. Moreover, the surface finish on the workpiece was consistently better for the EDMed inserts. The thesis discusses the relevance of the finite element method in the analysis of metal cutting processes, so that metal machinists can design, manufacture and deliver tools to the market quickly and on time without resorting to a trial-and-error approach for new products. Improvements in manufacturing technologies require better knowledge of modelling metal cutting processes. Computational models have great value in reducing or even eliminating the number of experiments traditionally used for tool design, process selection, machinability evaluation, and chip breakage investigations. In this work, theoretical and experimental investigations of metal machining were given special attention.
Finite element analysis (FEA) was given priority in this study to predict tool wear and coating deformation during machining. Particular attention was devoted to the complicated mechanisms usually associated with metal cutting, such as interfacial friction, heat generated by friction, severe strain in the cutting region, and high strain rates. It is therefore concluded that a roughened contact surface comprising peaks and valleys coated with a hard material (TiN) provides wear-resisting properties, as the coating becomes entrapped in the valleys and helps reduce friction at the chip-tool interface. The contributions to knowledge are: (a) a wear-resisting surface structure for application to contact surfaces and structures in metal cutting and forming tools, with the ability to give a wear-resisting surface profile; (b) a technique for designing tools with a roughened surface comprising peaks and valleys covered in a conformal coating of a material such as TiN or TiC, a wear-resisting structure whose surface roughness profile is composed of valleys that entrap residual coating material during wear, enabling the entrapped coating material to give improved wear resistance; (c) knowledge for increased tool life through wear resistance, hardness and chemical stability at high temperatures, because reduced friction at the tool-chip and work-tool interfaces due to the coating leads to reduced heat generation at the cutting zones; and (d) the finding that undulating surface topographies on cutting tips tend to hold coating materials longer in the valleys, giving enhanced protection to the tool, so that the tool can cut 40% faster and last 60% longer than conventional tools on the market today.
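As background to the cutting speed versus tool life trade-off reported above, the classical Taylor tool life relation (a standard machining equation, not a result of this work) is often used:

\[ V\,T^{\,n} = C \]

where V is cutting speed, T is tool life, and n and C are empirical constants for a given tool-workpiece pair; a coating that raises the effective C (or n) allows higher cutting speeds for the same tool life, which is consistent with the speed and life gains reported here.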
Abstract:
The mixing performance of three passive milli-scale reactors with different geometries was investigated at different Reynolds numbers. The effects of design and operating characteristics such as mixing channel shape and volume flow rate were investigated. The main objective of this work was to demonstrate a process design method that uses Computational Fluid Dynamics (CFD) for modeling and Additive Manufacturing (AM) technology for manufacture. The reactors were designed and simulated using SolidWorks and Fluent 15.0 software, respectively. The devices were manufactured with an EOS M-series AM system. Step response experiments with distilled Millipore water and sodium hydroxide solution provided time-dependent concentration profiles. Villermaux-Dushman reaction experiments were also conducted for additional verification of the CFD results and for evaluating the mixing efficiency of the different geometries. The time-dependent concentration data and reaction evaluation showed that the performance of the AM-manufactured reactors matched the CFD results reasonably well. The proposed design method allows the implementation of new and innovative solutions, especially in the process design phase, for industrial-scale reactor technologies. In addition, rapid implementation is a further advantage, owing to the virtual flow design and to the fast manufacturing, which uses the same geometric file formats.
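For reference, the Reynolds number used above to characterize the flow in the milli-scale mixing channels follows the standard definition (not specific to this work):

\[ Re = \frac{\rho\, u\, D_h}{\mu} \]

where ρ is the fluid density, u the mean velocity in the channel, D_h the hydraulic diameter of the mixing channel, and μ the dynamic viscosity.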
Abstract:
The aim of this thesis was threefold: firstly, to compare current player tracking technology in a single game of soccer; secondly, to investigate the running requirements of elite women's soccer, in particular the use and application of athlete tracking devices; and finally, to examine how game style can be quantified and defined. Study One compared four different match analysis systems commonly used in both research and applied settings: video-based time-motion analysis, a semi-automated multiple-camera-based system, and two commercially available Global Positioning System (GPS) based player tracking systems operating at 1 Hertz (Hz) and 5 Hz respectively. A comparison was made between each of the systems when recording the same game. Total distance covered during the match for the four systems ranged from 10 830 ± 770 m (semi-automated multiple-camera-based system) to 9 510 ± 740 m (video-based time-motion analysis). At running speeds categorised as high-intensity running (>15 km⋅h-1), the semi-automated multiple-camera-based system reported the highest distance of 2 650 ± 530 m, with video-based time-motion analysis reporting the least distance covered, 1 610 ± 370 m. At speeds considered to be sprinting (>20 km⋅h-1), the video-based time-motion analysis reported the highest value (420 ± 170 m) and the 1 Hz GPS units the lowest value (230 ± 160 m). These results demonstrate that there are differences in the determination of absolute distances, and that comparisons of results between match analysis systems should be made with caution. Currently there is no criterion measure for these match analysis methods, and as such it was not possible to determine whether one system was more accurate than another. Study Two provided an opportunity to apply player-tracking technology (GPS) to measure activity profiles and determine the physical demands of Australian international-level women soccer players. In four international women's soccer games, data were collected on a total of 15 Australian women soccer players using a 5 Hz GPS-based athlete tracking device. Results indicated that Australian women soccer players covered 9 140 ± 1 030 m during 90 min of play. The total distance covered by the Australian women was less than the 10 300 m reportedly covered by female soccer players in the Danish First Division. However, there was no apparent difference between these studies in estimated maximal aerobic capacity, as measured by multi-stage shuttle tests. This study suggests that contextual information, including the "game style" of both the team and the opposition, may influence physical performance in games. Study Three examined the effect that the level of the opposition had on the physical output of Australian women soccer players. In total, 58 game files from 5 Hz athlete-tracking devices from 13 international matches were collected. These files were analysed to examine relationships between physical demands, represented by total distance covered, high-intensity running (HIR) and distance covered sprinting, and the level of the opposition, as represented by the Fédération Internationale de Football Association (FIFA) ranking at the time of the match. Higher-ranking opponents elicited less high-speed running and greater low-speed activity compared to playing teams of similar or lower ranking. The results are important to coaches and practitioners in the preparation of players for international competition, and showed that the differing physical demands required depended on the level of the opponents.
The results also highlighted the need for continued research into integrating contextual information in team sports and demonstrated that soccer can be described as a dynamic and interactive system. The influence of playing strategy, tactics and, subsequently, the overall game style was highlighted as playing a significant part in the physical demands placed on the players. Study Four explored the concept of game style in field sports such as soccer. The aim of this study was to provide an applied framework with suggested metrics for use by coaches, media, practitioners and sports scientists. Based on the findings of Studies 1-3 and a systematic review of the relevant literature, a theoretical framework was developed to better understand how a team's game style could be quantified. Soccer games can be broken into key moments of play, and for each of these moments we categorised metrics that provide insight into success or otherwise, to help quantify and measure different playing styles. This study highlights that, to date, there has been no clear definition of game style in team sports, and as such a novel definition of game style is proposed that can be used by coaches, sport scientists, performance analysts, the media and the general public. Studies 1-3 outline four common methods of measuring the physical demands in soccer: video-based time-motion analysis, GPS at 1 Hz and at 5 Hz, and semi-automated multiple-camera-based systems. As there are no semi-automated multiple-camera-based systems available in Australia, primarily for cost and logistical reasons, GPS is widely accepted for use in team sports for tracking player movements in training and competition environments. This research identified that, although there are some limitations, GPS player-tracking technology may be a valuable tool for assessing running demands in soccer players and may subsequently contribute to our understanding of game style. The results of the research undertaken also reinforce the differences between methods used to analyse player movement patterns in field sports such as soccer and demonstrate that the results from different systems, such as GPS-based athlete tracking devices and semi-automated multiple-camera-based systems, cannot be used interchangeably. Indeed, the magnitude of the measurement differences between methods suggests that significant measurement error is evident. This was apparent even when the same technology was used at different sampling rates, such as GPS systems measuring at either 1 Hz or 5 Hz. It was also recognised that other factors influence how team sport athletes behave within an interactive system. These factors include the strength of the opposition and their style of play. In turn, these can affect the physical demands on players, which change from game to game, and even within games, depending on these contextual features. Finally, the concept of game style and how it might be measured was examined. Game style was defined as "the characteristic playing pattern demonstrated by a team during games. It will be regularly repeated in specific situational contexts such that measurement of variables reflecting game style will be relatively stable. Variables of importance are player and ball movements, interaction of players, and will generally involve elements of speed, time and space (location)".
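As a simple illustration of how distance per speed zone can be derived from athlete-tracking data using the thresholds quoted in Study One (>15 km⋅h-1 for high-intensity running, >20 km⋅h-1 for sprinting), the sketch below assumes a regularly sampled speed trace; it is not the commercial software used in these studies:

```python
import numpy as np

def distance_by_zone(speed_kmh, hz, zones=((0, 15), (15, 20), (20, float("inf")))):
    """Distance (m) covered in each speed zone from a speed trace in km/h
    sampled at `hz` samples per second (e.g. a 5 Hz GPS unit)."""
    v_kmh = np.asarray(speed_kmh, dtype=float)
    metres_per_sample = (v_kmh / 3.6) / hz            # km/h -> m/s, then m per sample
    return {(lo, hi): float(metres_per_sample[(v_kmh >= lo) & (v_kmh < hi)].sum())
            for lo, hi in zones}

# e.g. distance_by_zone(speed_trace_from_gps_export, hz=5)
```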
Abstract:
After harvest, plants remain living organisms with the capacity to carry out metabolic processes. Thus, from the moment they are detached from their source of nutrients, they become entirely dependent on their own organic reserves [1]. Postharvest changes cannot be stopped, but they can be slowed within certain limits. Therefore, this study was conducted to evaluate the effects of storage on the profiles of sugars, organic acids and tocopherols of two leafy vegetables. Wild samples of watercress (Nasturtium officinale R. Br.) and buckler sorrel (Rumex induratus Boiss. & Reut.), from the northeastern region of Portugal, were analyzed after harvest (control) and after storage in sterilized packages (using the passive modification mode) at 4 °C for 7 or 12 days, respectively. Analyses were performed by high-performance liquid chromatography (HPLC) using different detectors, i.e., a refractive index detector (RID) for free sugars, a photodiode array detector (PDA) for organic acids, and a fluorescence detector (FP) for tocopherols. Storage decreased the levels of fructose, glucose and total sugars in both leafy vegetables and increased the total organic acid content. The decrease in these sugars can be related to their use by the plant to produce the required energy. Ascorbic acid was detected in buckler sorrel and decreased with storage, while the amount of malic acid increased in both species. Curiously, all the tocopherol isoforms increased in watercress, whereas buckler sorrel presented higher values only for γ- and δ-tocopherols. In fact, the de novo synthesis of these bioactive compounds may be a plant strategy to fight the reactive species produced during storage. The knowledge of the behavior of these compounds during storage achieved in this study [2] may contribute to the development of more effective preservation strategies for leafy vegetables.
Abstract:
The Homogeneous Charge Compression Ignition (HCCI) engine is a promising combustion concept for reducing NOx and particulate matter (PM) emissions while providing high thermal efficiency in internal combustion engines. This concept, however, has limitations in combustion control and in achieving stable combustion at high loads. For HCCI to be a viable option for on-road vehicles, further understanding of its combustion phenomena and their control is essential. Thus, this thesis focuses both on the experimental setup of an HCCI engine at Michigan Technological University (MTU) and on developing a physical numerical simulation model called the Sequential Model for Residual Affected HCCI (SMRH) to investigate the performance of HCCI engines. The primary focus is on understanding the effects of intake and exhaust valve timings on HCCI combustion. On the experimental side, this thesis contributed to the development of the HCCI setup at MTU, in particular in the areas of measuring valve profiles and measuring piston-to-valve contact clearance for procuring new pistons for further studies of high geometric compression ratio HCCI engines. The work also included developing and testing a supercharging station and setting up an electrical air heater to extend the HCCI operating region. The HCCI engine setup is based on a GM 2.0 L LHU Gen 1 engine, a direct-injected engine with variable valve timing (VVT) capabilities. For the simulation studies, a computationally efficient modeling platform was developed and validated against experimental data from a single-cylinder HCCI engine. The in-cylinder pressure trace, combustion phasing (CA10, CA50, BD) and the performance metrics IMEP, thermal efficiency, and CO emission are found to be in good agreement with experimental data for different operating conditions. The effects of phasing the intake and exhaust valves are analyzed using SMRH. In addition, a novel index called the Fuel Efficiency and Emissions (FEE) index is defined and used to determine the optimal valve timings for engine operation through the use of FEE contour maps.
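For reference, two of the performance metrics mentioned above follow standard definitions (not specific to this thesis): the indicated mean effective pressure (IMEP) is the indicated work per cycle normalized by the displaced volume, and the indicated thermal efficiency relates that work to the fuel energy supplied per cycle,

\[ \mathrm{IMEP} = \frac{\oint p\,\mathrm{d}V}{V_d}, \qquad \eta_{th} = \frac{\oint p\,\mathrm{d}V}{m_f\, Q_{\mathrm{LHV}}} \]

where V_d is the displaced volume, m_f the fuel mass per cycle, and Q_LHV the fuel's lower heating value.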