901 results for Degree of freedom
Abstract:
In this thesis an investigation into theoretical models for the formation and interaction of nanoparticles is presented. The work includes a literature review of current models followed by a series of five chapters of original research. This thesis has been submitted in partial fulfilment of the requirements for the degree of Doctor of Philosophy by publication, and therefore each of the five chapters consists of a peer-reviewed journal article. The thesis concludes with a discussion of what has been achieved during the PhD candidature, the potential applications for this research and ways in which the research could be extended in the future. In this thesis we explore stochastic models pertaining to the interaction and evolution mechanisms of nanoparticles. In particular, we explore in depth the stochastic evaporation of molecules due to thermal activation and its ultimate effect on nanoparticle sizes and concentrations. Secondly, we analyse the thermal vibrations of nanoparticles suspended in a fluid and subject to oscillating drag forces (as would occur in a standing sound wave), and finally on lattice surfaces in the presence of high heat gradients. We have described in this thesis a number of new models for the description of multicompartment networks joined by multiple, stochastically evaporating links. The primary motivation for this work is the description of thermal fragmentation, in which multiple molecules holding parts of a carbonaceous nanoparticle may evaporate. Ultimately, these models predict the rate at which the network or aggregate fragments into smaller networks/aggregates, and with what aggregate size distribution. The models are highly analytic and describe the fragmentation of a link holding multiple bonds using Markov processes chosen to best describe different physical situations; these processes have been analysed using a number of mathematical methods. 
The fragmentation of the network/aggregate is then predicted using combinatorial arguments. Whilst there is some scepticism in the scientific community pertaining to the proposed mechanism of thermal fragmentation, we have presented compelling evidence in this thesis supporting the currently proposed mechanism and shown that our models can accurately match experimental results. This was achieved using a realistic simulation of the fragmentation of the fractal carbonaceous aggregate structure using our models. Furthermore, in this thesis a method of particle manipulation using acoustic standing waves is investigated. In our investigation we analysed the effect of frequency and particle size on the ability of the particle to be manipulated by means of a standing acoustic wave. In our results, we report the existence of a critical frequency for a particular particle size. This frequency is inversely proportional to the Stokes time of the particle in the fluid. We also find that at large frequencies the subtle Brownian motion of even larger particles plays a significant role in the efficacy of the manipulation. This is due to the decreasing size of the boundary layer between acoustic nodes. Our model utilises a multiple-time-scale approach to calculating the long-term effects of the standing acoustic field on the particles that are interacting with the sound. These effects are then combined with the effects of Brownian motion in order to obtain a complete mathematical description of the particle dynamics in such acoustic fields. Finally, in this thesis, we develop a numerical routine for the description of "thermal tweezers". Currently, the technique of thermal tweezers is predominantly theoretical; however, there has been a handful of successful experiments which demonstrate the effect in practice. Thermal tweezers is the name given to the way in which particles can be easily manipulated on a lattice surface by careful selection of a heat distribution over the surface. 
Typically, theoretical simulations of the effect can be rather time consuming, with supercomputer facilities processing data over days or even weeks. Our alternative numerical method for the simulation of particle distributions pertaining to the thermal tweezers effect uses the Fokker-Planck equation to derive a quick numerical method for the calculation of the effective diffusion constant resulting from the lattice and the temperature. We then use this diffusion constant and solve the diffusion equation numerically using the finite volume method. This saves the algorithm from calculating many individual particle trajectories, since it describes the flow of the probability distribution of particles in a continuous manner. The alternative method outlined in this thesis can produce a larger quantity of accurate results on a household PC in a matter of hours, which is much better than was previously achievable.
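The two-stage scheme described above can be illustrated with a minimal sketch of its second stage: a conservative finite-volume update of the 1D diffusion equation dp/dt = d/dx(D(x) dp/dx). The spatially varying diffusion constant below is a hypothetical placeholder (in the thesis it would come from the Fokker-Planck analysis of the lattice and temperature); this is an illustrative sketch, not the author's actual routine.

```python
import numpy as np

def fv_diffusion(p, D_faces, dx, dt, steps):
    """Explicit, conservative finite-volume solver for
    dp/dt = d/dx( D(x) dp/dx ) with zero-flux boundaries.

    p       : cell-averaged probability density, shape (n,)
    D_faces : diffusion constant evaluated at cell faces, shape (n + 1,)
    """
    p = p.copy()
    for _ in range(steps):
        flux = np.zeros(len(p) + 1)                  # boundary fluxes stay zero
        flux[1:-1] = -D_faces[1:-1] * np.diff(p) / dx
        p -= dt / dx * np.diff(flux)                 # each cell: inflow - outflow
    return p

# Hypothetical spatially varying effective diffusion constant D(x)
n, dx, dt = 100, 0.1, 0.001                          # dt keeps dt*Dmax/dx^2 < 0.5
x_faces = np.linspace(0.0, n * dx, n + 1)
D_faces = 1.0 + 0.5 * np.sin(2 * np.pi * x_faces / (n * dx))

p0 = np.zeros(n)
p0[n // 2] = 1.0 / dx                                # point-like initial distribution
p = fv_diffusion(p0, D_faces, dx, dt, steps=500)
# Total probability sum(p) * dx is conserved by construction.
```

Because the update is written in terms of face fluxes, probability is conserved exactly (the flux sums telescope), which is the property that lets this approach replace many individual particle trajectories.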
Abstract:
Cardiovascular diseases refer to the class of diseases that involve the heart or blood vessels (arteries and veins). Examples of medical devices for treating cardiovascular diseases include ventricular assist devices (VADs), artificial heart valves and stents. Metallic biomaterials such as titanium and its alloys are commonly used for ventricular assist devices. However, titanium and its alloys show unacceptable thrombosis, which represents a major obstacle to be overcome. Polyurethane (PU) polymer has better blood compatibility and has been used widely in cardiovascular devices. Thus one aim of the project was to coat a PU polymer onto a titanium substrate by increasing the surface roughness and surface functionality. Since the endothelium of a blood vessel has the most ideal non-thrombogenic properties, it was the target of this research project to grow an endothelial cell layer as a biological coating based on the tissue engineering strategy. However, seeding endothelial cells on smooth PU coating surfaces is problematic due to the quick loss of seeded cells, which do not adhere to the PU surface. Thus it was another aim of the project to create a porous PU top layer on the dense PU pre-layer-coated titanium substrate. The method of preparing the porous PU layer was based on solvent casting/particulate leaching (SCPL) modified with centrifugation. Without the step of centrifugation, the distribution of the salt particles was not uniform within the polymer solution, and the degree of interconnection between the salt particles was not well controlled. Using the centrifugal treatment, the pore distribution became uniform and the pore interconnectivity was improved even at a high polymer solution concentration (20%) with the maximal salt weight added to the polymer solution. The titanium surfaces were modified by alkali and heat treatment, followed by functionalisation using hydrogen peroxide. 
A silane coupling agent was coated before the application of the dense PU pre-layer and the porous PU top layer. The ability of the porous top layer to grow and retain the endothelial cells was also assessed through cell culture techniques. The bonding strengths of the PU coatings to the modified titanium substrates were measured and related to the surface morphologies. The outcome of the project is that it has laid a foundation to achieve the strategy of endothelialisation for the blood compatibility of medical devices. This thesis is divided into seven chapters. Chapter 2 describes the current state of the art in the field of surface modification in cardiovascular devices such as ventricular assist devices (VADs). It also analyses the pros and cons of the existing coatings, particularly in the context of this research. The surface coatings for VADs have evolved from early organic/inorganic (passive) coatings, to bioactive coatings (e.g. biomolecules), and to cell-based coatings. Based on the commercial applications and the potential of the coatings, the relevant review is focused on the following six types of coatings: (1) titanium nitride (TiN) coatings, (2) diamond-like carbon (DLC) coatings, (3) 2-methacryloyloxyethyl phosphorylcholine (MPC) polymer coatings, (4) heparin coatings, (5) textured surfaces, and (6) endothelial cell lining. Chapter 3 reviews the polymer scaffolds and one relevant fabrication method. In tissue engineering, the function of a polymeric material is to provide a 3-dimensional architecture (scaffold) which is typically used to accommodate transplanted cells and to guide their growth and the regeneration of tissue. The success of these systems is dependent on the design of the tissue engineering scaffolds. Chapter 4 describes chemical surface treatments for titanium and titanium alloys to increase the bond strength to polymer by altering the substrate surface, for example, by increasing surface roughness or changing surface chemistry. 
The nature of the surface treatment prior to bonding is found to be a major factor controlling the bonding strength. By increasing surface roughness, an increase in surface area occurs, which allows the adhesive to flow in and around the irregularities on the surface to form a mechanical bond. Changing the surface chemistry also results in the formation of a chemical bond. Chapter 5 shows that bond strengths between titanium and polyurethane could be significantly improved by surface treating the titanium prior to bonding. Alkaline heat treatment and H2O2 treatment were applied to change the surface roughness and the surface chemistry of titanium. Surface treatment increases the bond strength by altering the substrate surface in a number of ways, including increasing the surface roughness and changing the surface chemistry. Chapter 6 deals with the characterization of the polyurethane scaffolds, which were fabricated using an enhanced solvent casting/particulate (salt) leaching (SCPL) method developed for preparing three-dimensional porous scaffolds for cardiac tissue engineering. The enhanced method involves the combination of a conventional SCPL method and a step of centrifugation, with the centrifugation being employed to improve the pore uniformity and interconnectivity of the scaffolds. It is shown that the enhanced SCPL method and a collagen coating resulted in a spatially uniform distribution of cells throughout the collagen-coated PU scaffolds. In Chapter 7, the enhanced SCPL method is used to form porous features on the polyurethane-coated titanium substrate. The cavities anchored the endothelial cells, allowing them to remain on the blood-contacting surfaces. It is shown that the surface porosities created by the enhanced SCPL method may be useful in forming a stable endothelial layer upon the blood-contacting surface. 
Chapter 8 finally summarises the entire work performed on the fabrication and analysis of the polymer-Ti bonding, the enhanced SCPL method and the PU microporous surface on the metallic substrate. It then outlines the possibilities for future work and research in this area.
Abstract:
While spatial determinants of emmetropization have been examined extensively in animal models and spatial processing of human myopes has also been studied, there have been few studies investigating temporal aspects of emmetropization and temporal processing in human myopia. The influence of temporal light modulation on eye growth and refractive compensation has been observed in animal models and there is evidence of temporal visual processing deficits in individuals with high myopia or other pathologies. Given this, the aims of this work were to examine the relationships between myopia (i.e. degree of myopia and progression status) and temporal visual performance and to consider any temporal processing deficits in terms of the parallel retinocortical pathways. Three psychophysical studies investigating temporal processing performance were conducted in young adult myopes and non-myopes: (1) backward visual masking, (2) dot motion perception and (3) phantom contour. For each experiment there were approximately 30 young emmetropes, 30 low myopes (myopia less than 5 D) and 30 high myopes (5 to 12 D). In the backward visual masking experiment, myopes were also classified according to their progression status (30 stable myopes and 30 progressing myopes). The first study was based on the observation that the visibility of a target is reduced by a second target, termed the mask, presented quickly after the first target. Myopes were more affected by the mask when the task was biased towards the magnocellular pathway; myopes had a 25% mean reduction in performance compared with emmetropes. However, there was no difference in the effect of the mask when the task was biased towards the parvocellular system. For all test conditions, there was no significant correlation between backward visual masking task performance and either the degree of myopia or myopia progression status. 
The dot motion perception study measured detection thresholds for the minimum displacement of moving dots, the maximum displacement of moving dots and the degree of motion coherence required to correctly determine the direction of motion. The visual processing of these tasks is dominated by the magnocellular pathway. Compared with emmetropes, high myopes had reduced ability to detect the minimum displacement of moving dots for stimuli presented at the fovea (20% higher mean threshold) and possibly at the inferior nasal retina. The minimum displacement threshold was significantly and positively correlated with myopia magnitude and axial length, and significantly and negatively correlated with retinal thickness for the inferior nasal retina. The performance of emmetropes and myopes for all the other dot motion perception tasks was similar. In the phantom contour study, the highest temporal frequency of the flickering phantom pattern at which the contour was visible was determined. Myopes had significantly lower flicker detection limits (21.8 ± 7.1 Hz) than emmetropes (25.6 ± 8.8 Hz) for tasks biased towards the magnocellular pathway, for both high (99%) and low (5%) contrast stimuli. There was no difference in flicker limits for a phantom contour task biased towards the parvocellular pathway. For all phantom contour tasks, there was no significant correlation between flicker detection thresholds and magnitude of myopia. Of the psychophysical temporal tasks studied here, those primarily involving processing by the magnocellular pathway revealed differences in the performance of the refractive error groups. While there are a number of interpretations for these data, this suggests that there may be a temporal processing deficit in some myopes that is selective for the magnocellular system. The minimum displacement dot motion perception task appears the most sensitive test, of those studied, for investigating changes in visual temporal processing in myopia. 
Data from the visual masking and phantom contour tasks suggest that the alterations to temporal processing occur at an early stage of myopia development. In addition, the link between increased minimum displacement threshold and decreasing retinal thickness suggests that there is a retinal component to the observed modifications in temporal processing.
Abstract:
Principal Topic: According to Shane & Venkataraman (2000), entrepreneurship consists of the recognition and exploitation of venture ideas - or opportunities, as they are often called - to create future goods and services. This definition puts venture ideas at the heart of entrepreneurship research. Substantial research has been done on venture ideas in order to enhance our understanding of this phenomenon (e.g. Choi & Shepherd, 2004; Shane, 2000; Shepherd & DeTienne, 2005). However, we are yet to learn what factors drive entrepreneurs' perceptions of the relative attractiveness of venture ideas, and how important different idea characteristics are for such assessments. Ruef (2002) recognized that there is an uneven distribution of venture ideas undertaken by entrepreneurs in the USA. A majority introduce either a new product/service or access a new market or market segment. A smaller percentage of entrepreneurs introduce a new method of production, organizing, or distribution. This implies that some forms of venture ideas are perceived by entrepreneurs as more important or valuable than others. However, Ruef does not provide any information regarding why some forms of venture ideas are more common than others among entrepreneurs. Therefore, this study empirically investigates what factors affect the attractiveness of venture ideas as well as their relative importance. Based on two key characteristics of venture ideas, namely venture idea newness and relatedness, our study investigates how different types and degrees of newness and relatedness of venture ideas affect their attractiveness as perceived by expert entrepreneurs. Methodology/Key Propositions: According to Schumpeter (1934), entrepreneurs introduce different types of venture ideas, such as new products/services, new methods of production, entry into new markets/customer segments and new methods of promotion. 
Further, according to Schumpeter (1934) and Kirzner (1973), venture ideas introduced to the market range along a continuum from innovative to imitative ideas. The distinction between these two extremes highlights an important property of venture ideas, namely their newness. Entrepreneurs, in order to gain competitive advantage or above-average returns, introduce venture ideas which may be new to the world, new to the market they seek to enter, substantially improved from current offerings, or imitative of existing offerings. Expert entrepreneurs may be more attracted to venture ideas that exhibit a high degree of newness because higher newness is coupled with increased market potential (Drucker, 1985). Moreover, certain individual characteristics also affect the attractiveness of venture ideas. According to Shane (2000), an individual's prior knowledge is closely associated with the recognition of venture ideas. Sarasvathy's (2001) effectuation theory proposes a high degree of relatedness between venture ideas and the resource position of the individual. Thus, entrepreneurs may be more attracted to venture ideas that are closely aligned with the knowledge and/or resources they already possess. On the other hand, the potential financial gain (Shepherd & DeTienne, 2005) may be larger for ideas that are not close to the entrepreneur's home turf. Therefore, potential financial gain is a stimulus that has to be considered separately. We aim to examine how entrepreneurs weigh considerations of different forms of newness and relatedness, as well as potential financial gain, in assessing the attractiveness of venture ideas. We use conjoint analysis to determine how expert entrepreneurs develop preferences for venture ideas involving different degrees of newness, relatedness and potential gain. This analytical method paves the way to measuring the trade-offs they make when choosing a particular venture idea. 
The conjoint analysis estimates respondents' preferences in terms of utilities (or part-worths) for each level of newness, relatedness and potential gain of venture ideas. A sample of 50 expert entrepreneurs who were awarded young entrepreneurship awards in Sri Lanka in 2007 is used for interviews. Each respondent is interviewed using 32 scenarios that present different combinations of the possible profiles under consideration. Conjoint software (SPSS) is used to analyse the data. Results and Implications: The data collection of this study is still underway. However, the results will provide information regarding the attractiveness of each level of newness, relatedness and potential gain of a venture idea, and their relative importance in a business model. Additionally, these results provide important implications for entrepreneurs, consultants and other stakeholders as regards the importance of different attributes of a venture idea at different levels. Entrepreneurs, consultants and other stakeholders could make decisions accordingly.
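The study runs its conjoint analysis in SPSS; as a hedged illustration of the underlying computation, part-worth utilities can be recovered by ordinary least squares on dummy-coded attribute levels. The attribute levels, profile count and simulated ratings below are hypothetical, chosen only to show the mechanics, and are not the study's actual design.

```python
import numpy as np
from itertools import product

# Hypothetical attribute levels (the study's actual levels are not listed here)
newness = ["imitative", "improved", "new_to_market", "new_to_world"]
relatedness = ["low", "high"]
gain = ["low", "medium", "high"]
profiles = list(product(newness, relatedness, gain))    # 24 hypothetical profiles

def dummy_code(profile):
    """Dummy coding with the first level of each attribute as the baseline."""
    nw, rel, g = profile
    row = [1.0]                                          # intercept
    row += [1.0 if nw == lvl else 0.0 for lvl in newness[1:]]
    row += [1.0 if rel == lvl else 0.0 for lvl in relatedness[1:]]
    row += [1.0 if g == lvl else 0.0 for lvl in gain[1:]]
    return row

X = np.array([dummy_code(p) for p in profiles])          # 24 x 7 design matrix

# Simulated attractiveness ratings for one respondent (illustrative only):
# known part-worths plus a little rating noise.
rng = np.random.default_rng(0)
true_w = np.array([5.0, 0.5, 1.0, 1.5, 0.8, 0.3, 0.9])
y = X @ true_w + rng.normal(0.0, 0.1, len(profiles))

# Part-worth utilities recovered by ordinary least squares
w, *_ = np.linalg.lstsq(X, y, rcond=None)
```

Each estimated coefficient in `w` is the utility of moving from an attribute's baseline level to the coded level, which is exactly the quantity used to compare the relative importance of newness, relatedness and potential gain.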
Abstract:
Much research has investigated the differences between option implied volatilities and econometric model-based forecasts. Implied volatility is a market-determined forecast, in contrast to model-based forecasts that employ some degree of smoothing of past volatility to generate forecasts. Implied volatility therefore has the potential to reflect information that a model-based forecast could not. This paper considers two issues relating to the informational content of the S&P 500 VIX implied volatility index: first, whether it subsumes information on how historical jump activity contributed to price volatility, and second, whether the VIX reflects any incremental information pertaining to future jump activity relative to model-based forecasts. It is found that the VIX index both subsumes information relating to past jump contributions to total volatility and reflects incremental information pertaining to future jump activity. This issue has not been examined previously and expands our understanding of how option markets form their volatility forecasts.
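As a point of contrast to the market-determined VIX, a typical model-based forecast that "smooths past volatility" is the exponentially weighted moving average of squared returns. The sketch below (with simulated returns, not S&P 500 data, and the conventional smoothing parameter lambda = 0.94) illustrates that class of forecast; it is not the specific econometric model used in the paper.

```python
import numpy as np

def ewma_volatility(returns, lam=0.94, seed_len=20):
    """One-step-ahead EWMA (RiskMetrics-style) volatility forecast:
    sigma2[t+1] = lam * sigma2[t] + (1 - lam) * r[t]**2
    """
    sigma2 = float(np.var(returns[:seed_len]))   # seed with a sample variance
    for r in returns:
        sigma2 = lam * sigma2 + (1.0 - lam) * r ** 2
    return np.sqrt(sigma2)

# Simulated daily returns with 1% volatility (illustrative, not market data)
rng = np.random.default_rng(1)
returns = rng.normal(0.0, 0.01, 500)
vol_forecast = ewma_volatility(returns)          # should sit near 0.01
```

Because the forecast is a geometrically weighted average of past squared returns, a sudden jump in prices is only gradually absorbed, which is the structural reason an implied-volatility index can carry incremental jump information relative to forecasts of this kind.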
Abstract:
Biological inspiration has produced some successful solutions for the estimation of self-motion from visual information. In this paper we present the construction of a unique new camera, inspired by the compound eye of insects. The hemispherical nature of the compound eye has some intrinsically valuable properties in producing optical flow fields that are suitable for egomotion estimation in six degrees of freedom. The camera that we present has the added advantage of being lightweight and low cost, making it suitable for a range of mobile robot applications. We present some initial results that show the effectiveness of our egomotion estimation algorithm and the image capture capability of the hemispherical camera.
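The abstract does not detail the egomotion algorithm itself; a common formulation for a spherical (compound-eye-like) sensor models the flow at each unit view direction d as u = -rho (I - d d^T) v + d x omega, which is linear in the translation v and rotation omega once the inverse depths rho are assumed known. The sketch below is a generic least-squares illustration of that idea under those simplifying assumptions, not the authors' algorithm.

```python
import numpy as np

def estimate_egomotion(dirs, flows, inv_depths):
    """Least-squares 6-DOF egomotion from flow on a spherical sensor.

    Model assumed here: u_i = -rho_i * (I - d_i d_i^T) v + d_i x omega,
    stacked into a linear system A [v; omega] = u and solved in one shot.
    """
    A_rows, b_rows = [], []
    for d, u, rho in zip(dirs, flows, inv_depths):
        P = np.eye(3) - np.outer(d, d)                 # tangent-plane projector
        skew_d = np.array([[0.0, -d[2], d[1]],
                           [d[2], 0.0, -d[0]],
                           [-d[1], d[0], 0.0]])        # [d]_x, so [d]_x w = d x w
        A_rows.append(np.hstack([-rho * P, skew_d]))
        b_rows.append(u)
    A, b = np.vstack(A_rows), np.hstack(b_rows)
    x, *_ = np.linalg.lstsq(A, b, rcond=None)
    return x[:3], x[3:]                                # (v, omega)

# Synthetic check: random view directions, known motion, known inverse depths
rng = np.random.default_rng(2)
dirs = rng.normal(size=(60, 3))
dirs /= np.linalg.norm(dirs, axis=1, keepdims=True)
inv_depths = rng.uniform(0.2, 1.0, 60)
v_true = np.array([0.3, -0.1, 0.5])
omega_true = np.array([0.05, 0.2, -0.1])
flows = np.array([-rho * (np.eye(3) - np.outer(d, d)) @ v_true
                  + np.cross(d, omega_true)
                  for d, rho in zip(dirs, inv_depths)])
v_est, omega_est = estimate_egomotion(dirs, flows, inv_depths)
```

The wide spread of view directions on a hemisphere is what makes the stacked system well conditioned for all six degrees of freedom, which is the property the abstract attributes to the compound-eye geometry.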
Abstract:
Participatory design has the moral and pragmatic tenet of including those who will be most affected by a design in the design process. However, good participation is hard to achieve, and results linking project success and degree of participation are inconsistent. Through three case studies examining some of the challenges that different properties of knowledge - novelty, difference, dependence - can impose on the participatory endeavour, we examine some of the consequences for the participatory process of failing to bridge across knowledge boundaries - syntactic, semantic, and pragmatic. One pragmatic consequence, disrupting the user's feeling of involvement in the project, has been suggested as a possible explanation for the inconsistent results linking participation and project success. To aid in addressing these issues a new form of participatory research, called embedded research, is proposed and examined within the framework of the case studies and the knowledge framework, with a call for future research into its possibilities.
Abstract:
Poly(vinylidene fluoride) and copolymers of vinylidene fluoride with hexafluoropropylene, trifluoroethylene and chlorotrifluoroethylene have been exposed to gamma irradiation in vacuum, up to doses of 1 MGy under identical conditions, to obtain a ranking of radiation sensitivities. Changes in the tensile properties, crystalline melting points, heats of fusion, gel contents and solvent uptake factors were used as the defining parameters. The initial degree of crystallinity and film processing had the greatest influence on relative radiation damage, although the cross-linked network features were almost identical in their solvent swelling characteristics, regardless of the comonomer composition or content.
Abstract:
High renewal capacity and maintenance of multipotency of human adult stem cells (hSCs) are prerequisites for experimental analysis as well as for potential clinical usage. The most widely used strategy for hSC culture and proliferation uses serum. However, serum is poorly defined and has a considerable degree of inter-batch variation, which makes large-scale mesenchymal stem cell (MSC) expansion in homogeneous culture conditions difficult. Moreover, it is often observed that cells grown in serum-containing media spontaneously differentiate into unknown and/or undesired phenotypes. Another way of maintaining hSC development is using cytokines and/or tissue-specific growth factors; however, this is a very expensive approach and can lead to early unwanted differentiation. In order to circumvent these issues, we investigated the role of sphingosine-1-phosphate (S1P) in the growth and multipotency maintenance of human bone marrow and adipose tissue-derived MSCs. We show that S1P induces growth, and in combination with reduced serum, or with the growth factors FGF and platelet-derived growth factor-AB, S1P has an enhancing effect on growth. We also show that MSCs cultured in S1P-supplemented media are able to maintain their differentiation potential for at least as long as cells grown in the usual serum-containing media. This is shown by the ability of cells grown in S1P-containing media to undergo osteogenic as well as adipogenic differentiation. This is of interest, since S1P is a relatively inexpensive natural product which can be obtained in homogeneous high-purity batches: this will minimize costs and potentially reduce the unwanted side effects observed with serum. Taken together, S1P is able to induce proliferation while maintaining the multipotency of different human stem cells, suggesting a potential for S1P in developing serum-free or serum-reduced defined media for adult stem cell cultures.
Abstract:
Thermogravimetric analysis-mass spectrometry, X-ray diffraction and scanning electron microscopy (SEM) were used to characterize eight kaolinite samples from China. The results show that the thermal decomposition occurs in three main steps: (a) desorption of water below 100 °C, (b) dehydration at about 225 °C, and (c) well-defined dehydroxylation at around 450 °C. It is also found that decarbonization takes place at 710 °C due to the decomposition of calcite impurity in the kaolin. The temperature of dehydroxylation of kaolinite is found to be influenced by the degree of disorder of the kaolinite structure, and the gases evolved in the decomposition process can vary because of the different amounts and kinds of impurities. It is evident from the mass spectra that the interlayer carbonate from the calcite impurity and organic carbon is released as CO2 at around 225, 350 and 710 °C in the kaolinite samples.
Abstract:
The present paper reviews the reliability and validity of visual analogue scales (VAS) in terms of (1) their ability to predict feeding behaviour, (2) their sensitivity to experimental manipulations, and (3) their reproducibility. VAS correlate with, but do not reliably predict, energy intake to the extent that they could be used as a proxy for energy intake. They do predict meal initiation in subjects eating their normal diets in their normal environment. Under laboratory conditions, subjectively rated motivation to eat using VAS is sensitive to experimental manipulations and has been found to be reproducible in relation to those experimental regimens. Other work has found them not to be reproducible in relation to repeated protocols. On balance, it would appear, in as much as it is possible to quantify, that VAS exhibit a good degree of within-subject reliability and validity in that they predict with reasonable certainty meal initiation and amount eaten, and are sensitive to experimental manipulations. This reliability and validity appears more pronounced under the controlled (but more artificial) conditions of the laboratory, where the signal-to-noise ratio in experiments appears to be elevated relative to real life. It appears that VAS are best used in within-subject, repeated-measures designs where the effect of different treatments can be compared under similar circumstances. They are best used in conjunction with other measures (e.g. feeding behaviour, changes in plasma metabolites) rather than as proxies for these variables. New hand-held electronic appetite rating systems (EARS) have been developed to increase the reliability of data capture and decrease investigator workload. Recent studies have compared these with traditional pen and paper (P&P) VAS. The EARS have been found to be sensitive to experimental manipulations and reproducible relative to P&P. 
However, subjects appear to exhibit a significantly more constrained use of the scale when using the EARS relative to the P&P. For this reason it is recommended that the two techniques are not used interchangeably.
Abstract:
Exercise is known to cause physiological changes that could affect the impact of nutrients on appetite control. This study was designed to assess the effect of drinks containing either sucrose or high-intensity sweeteners on food intake following exercise. Using a repeated-measures design, three drink conditions were employed: plain water (W), a low-energy drink sweetened with the artificial sweeteners aspartame and acesulfame-K (L), and a high-energy, sucrose-sweetened drink (H). Following a period of challenging exercise (70% VO2 max for 50 min), subjects consumed freely from a particular drink before being offered a test meal at which energy and nutrient intakes were measured. The degree of pleasantness (palatability) of the drinks was also measured before and after exercise. At the test meal, energy intake following the artificially sweetened (L) drink was significantly greater than after the water (W) and sucrose (H) drinks (p < 0.05). Compared with the artificially sweetened (L) drink, the high-energy (H) drink suppressed intake by approximately the energy contained in the drink itself. However, there was no difference between the water (W) and the sucrose (H) drinks in test meal energy intake. When the net effects were compared (i.e., drink + test meal energy intake), total energy intake was significantly lower after the water (W) drink compared with the two sweet (L and H) drinks. The exercise period brought about changes in the perceived pleasantness of the water, but had no effect on either of the sweet drinks. The remarkably precise energy compensation demonstrated after the higher-energy sucrose drink suggests that exercise may prime the system to respond sensitively to nutritional manipulations. The results may also have implications for the effect on short-term appetite control of different types of drinks used to quench thirst during and after exercise.
Abstract:
Summary: There are four interactions to consider between energy intake (EI) and energy expenditure (EE) in the development and treatment of obesity: (1) Does sedentariness alter levels of EI or subsequent EE? (2) Do high levels of EI alter physical activity or exercise? (3) Do exercise-induced increases in EE drive EI upwards and undermine dietary approaches to weight management? (4) Do low levels of EI elevate or decrease EE? There is little evidence that sedentariness alters levels of EI. This lack of cross-talk between altered EE and EI appears to promote a positive energy balance (EB). Lifestyle studies also suggest that a sedentary routine actually offers the opportunity for over-consumption. Substantive changes in non-exercise activity thermogenesis are feasible, but not clearly demonstrated. Cross-talk between elevated EE and EI is initially too weak, and takes too long to activate, to seriously threaten dietary approaches to weight management. It appears that substantial fat loss is possible before intake begins to track a sustained elevation of EE. There is more evidence that low levels of EI do lower physical activity levels, in relatively lean men under conditions of acute or prolonged semi-starvation and in dieting obese subjects. During altered EB there are a number of small but significant changes in the components of EE, including (i) sleeping and basal metabolic rate, (ii) the energy cost of weight change as weight is gained or lost, (iii) exercise efficiency, (iv) the energy cost of weight-bearing activities, and (v) during substantive overfeeding, diet composition (fat versus carbohydrate), which will influence the energy cost of nutrient storage by ~15%. The responses (i-v) above are all "obligatory" responses. Altered EB can also stimulate facultative behavioural responses, as a consequence of cross-talk between EI and EE. Altered EB will lead to changes in the mode, duration and intensity of physical activities. Feeding behaviour can also change. 
The degree of inter-individual variability in these responses will define the scope within which various mechanisms of EB compensation can operate. The relative importance of "obligatory" versus facultative (behavioural) responses as components of EB control needs to be defined.
Abstract:
Purpose of review: To examine the relationship between energy intake, appetite control and exercise, with particular reference to longer-term exercise studies. This approach is necessary when exploring the benefits of exercise for weight control, as changes in body weight and energy intake are variable and reflect diversity in weight loss. Recent findings: Recent evidence indicates that longer-term exercise is characterized by a highly variable response in eating behaviour. Individuals display susceptibility or resistance to exercise-induced weight loss, with changes in energy intake playing a key role in determining the degree of weight loss achieved. Marked differences in hunger and energy intake exist between those who are capable of tolerating periods of exercise-induced energy deficit and those who are not. Exercise-induced weight loss can increase the orexigenic drive in the fasted state, but for some this is offset by improved postprandial satiety signalling. Summary: The biological and behavioural responses to acute and long-term exercise are highly variable, and these responses interact to determine the propensity for weight change. For some people, long-term exercise stimulates compensatory increases in energy intake that attenuate weight loss. However, favourable changes in body composition and health markers still occur in the absence of weight loss. The physiological mechanisms that confer susceptibility to compensatory overconsumption still need to be determined.
Abstract:
The concept of radar was developed for the estimation of the distance (range) and velocity of a target from a receiver. The distance measurement is obtained by measuring the time taken for the transmitted signal to propagate to the target and return to the receiver. The target's velocity is determined by measuring the Doppler-induced frequency shift of the returned signal, caused by the rate of change of the time-delay from the target. As researchers further developed conventional radar systems it became apparent that additional information was contained in the backscattered signal, and that this information could in fact be used to describe the shape of the target itself. This is due to the fact that a target can be considered to be a collection of individual point scatterers, each of which has its own velocity and time-delay. Delay-Doppler parameter estimation of each of these point scatterers thus corresponds to a mapping of the target's range and cross-range, thus producing an image of the target. Much research has been done in this area since the early radar imaging work of the 1960s. At present there are two main categories into which radar imaging falls. The first of these is related to the case where the backscattered signal is considered to be deterministic. The second is related to the case where the backscattered signal is of a stochastic nature. In both cases the information which describes the target's scattering function is extracted by the use of the ambiguity function, a function which correlates the backscattered signal in time and frequency with the transmitted signal. In practical situations, it is often necessary to have the transmitter and the receiver of the radar system sited at different locations. The problem in these situations is that a reference signal must then be present in order to calculate the ambiguity function.
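The delay-Doppler estimation described above can be sketched numerically. The following is a minimal illustrative example, not taken from the thesis: it evaluates the cross-ambiguity surface of a noise-like pulse against the echo from a single hypothetical point scatterer (all signal parameters are assumptions chosen for illustration) and recovers the scatterer's time-delay and Doppler shift from the peak of the surface.

```python
import numpy as np

def ambiguity(tx, rx, fs, doppler_bins):
    """Magnitude of the cross-ambiguity surface |A(tau, fd)|: the received
    signal is correlated, at every time lag, against Doppler-shifted
    copies of the transmitted reference."""
    n = len(tx)
    t = np.arange(n) / fs
    lags = np.arange(-(n - 1), n)                 # delay axis, in samples
    surf = np.empty((len(doppler_bins), len(lags)))
    for i, fd in enumerate(doppler_bins):
        ref = tx * np.exp(2j * np.pi * fd * t)    # Doppler-shifted reference
        surf[i] = np.abs(np.correlate(rx, ref, mode="full"))
    return lags, surf

# Toy echo: a random-phase (noise-like) pulse delayed by 20 samples and
# Doppler-shifted by 200 Hz, mimicking a single point scatterer.
fs, n = 1e4, 256
rng = np.random.default_rng(0)
tx = np.exp(2j * np.pi * rng.random(n))           # noise-like pulse
delay, fd0 = 20, 200.0
rx = np.roll(tx, delay)
rx[:delay] = 0
rx = rx * np.exp(2j * np.pi * fd0 * np.arange(n) / fs)

dopplers = np.linspace(-500, 500, 21)
lags, surf = ambiguity(tx, rx, fs, dopplers)
i, j = np.unravel_index(np.argmax(surf), surf.shape)
print(lags[j], dopplers[i])   # peak at delay 20 samples, Doppler 200 Hz
```

A noise-like pulse is used because its "thumbtack" ambiguity surface makes the peak unambiguous; the delay axis maps to range via r = c*tau/2, and the Doppler axis to radial velocity.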
This causes an additional problem in that detailed phase information about the transmitted signal is then required at the receiver. It is this latter problem which has led to the investigation of radar imaging using time-frequency distributions. As will be shown in this thesis, the phase information about the transmitted signal can be extracted from the backscattered signal using time-frequency distributions. The principal aim of this thesis was the development, and subsequent discussion, of the theory of radar imaging using time-frequency distributions. Consideration is first given to the case where the target is diffuse, i.e. where the backscattered signal has temporal stationarity and a spatially white power spectral density. The complementary situation is also investigated, i.e. where the target is no longer diffuse, but some degree of correlation exists between the time-frequency points. Computer simulations are presented to demonstrate the concepts and theories developed in the thesis. For the proposed radar system to be practically realisable, both the time-frequency distributions and the associated algorithms developed must be able to be implemented in a timely manner. For this reason an optical architecture is proposed. This architecture is specifically designed to obtain the required time and frequency resolution when using laser radar imaging. The complex light amplitude distributions produced by this architecture have been computer-simulated using an optical compiler.
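The abstract does not name the particular time-frequency distributions used, but the Wigner-Ville distribution is the standard example of the class. The sketch below (an assumption-laden illustration in NumPy, not the thesis's proposed optical implementation) computes a discrete Wigner-Ville distribution by Fourier-transforming the instantaneous autocorrelation x[t+m]·conj(x[t-m]) over the lag m, and shows that a pure tone concentrates on a single frequency row at each instant.

```python
import numpy as np

def wigner_ville(x):
    """Discrete Wigner-Ville distribution of a complex (analytic) signal:
    at each time t, FFT the instantaneous autocorrelation over lag m."""
    n = len(x)
    W = np.zeros((n, n))
    for t in range(n):
        mmax = min(t, n - 1 - t)                  # lags that stay in-bounds
        kernel = np.zeros(n, dtype=complex)
        for m in range(-mmax, mmax + 1):
            kernel[m % n] = x[t + m] * np.conj(x[t - m])
        # kernel[-m] == conj(kernel[m]), so the FFT is real by construction
        W[t] = np.fft.fft(kernel).real
    return W

# A pure analytic tone at DFT bin 8 of 64 samples.
n = 64
x = np.exp(2j * np.pi * 8 * np.arange(n) / n)
W = wigner_ville(x)
# The lag variable enters as 2m, so the tone appears at bin 2*8 = 16.
print(np.argmax(W[n // 2]))
```

The O(n^2 log n) cost of evaluating this for every time sample is one motivation for the fast (e.g. optical) implementations the thesis pursues.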