61 results for Component behaviour


Relevance: 20.00%

Abstract:

The objective of this study was to understand how organizational knowledge governance mechanisms affect individual motivation, opportunity, and ability to share knowledge (the MOA framework), and further, how these individual knowledge-sharing conditions affect actual knowledge-sharing behaviour. The study followed the knowledge governance approach and a micro-foundations perspective to develop a theoretical model and hypotheses that could explain the causal relationships between knowledge governance mechanisms, individual knowledge-sharing conditions, and individual knowledge-sharing behaviour. A quantitative research strategy and multivariate data analysis techniques (SEM) were used to test the hypotheses on a survey dataset of 256 employees from eleven military schools of the Finnish Defence Forces (FDF). The results showed that “performance-based feedback and rewards” affects an employee’s “intrinsic motivation towards knowledge sharing”, that “lateral coordination” affects an employee’s “knowledge self-efficacy”, and that “training and development” is positively related to “time availability” for knowledge sharing but negatively affects an employee’s knowledge self-efficacy. Individual motivation and knowledge self-efficacy towards knowledge sharing affected knowledge-sharing behaviour when work-related knowledge was shared 1) between employees within a department and 2) between employees in different departments; however, these factors did not play a crucial role in subordinate–superior knowledge sharing. The findings suggest that individual motivation, opportunity, and ability towards knowledge sharing affect individual knowledge-sharing behaviour differently in different knowledge-sharing situations. Furthermore, knowledge governance mechanisms can be used to manage individual-level knowledge-sharing conditions and individual knowledge-sharing behaviour, but their effect also varies across knowledge-sharing situations.
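The hypothesised paths from governance mechanisms to the MOA conditions and on to sharing behaviour can be written down as a structural equation model. Below is a minimal sketch using the Python semopy package on synthetic stand-in data; the column names and effect sizes are hypothetical illustrations, not the study's FDF survey data or its estimates.

```python
import numpy as np
import pandas as pd
import semopy

# Synthetic stand-in data for 256 respondents; the real analysis would use the survey dataset
rng = np.random.default_rng(0)
n = 256
df = pd.DataFrame({
    "feedback_rewards": rng.normal(size=n),
    "lateral_coordination": rng.normal(size=n),
    "training_development": rng.normal(size=n),
})
df["intrinsic_motivation"] = 0.4 * df["feedback_rewards"] + rng.normal(size=n)
df["knowledge_self_efficacy"] = (0.3 * df["lateral_coordination"]
                                 - 0.2 * df["training_development"] + rng.normal(size=n))
df["time_availability"] = 0.3 * df["training_development"] + rng.normal(size=n)
df["knowledge_sharing"] = (0.5 * df["intrinsic_motivation"]
                           + 0.3 * df["knowledge_self_efficacy"] + rng.normal(size=n))

# Path model mirroring the hypothesised governance -> MOA -> sharing structure
model_desc = """
intrinsic_motivation ~ feedback_rewards
knowledge_self_efficacy ~ lateral_coordination + training_development
time_availability ~ training_development
knowledge_sharing ~ intrinsic_motivation + knowledge_self_efficacy + time_availability
"""
model = semopy.Model(model_desc)
model.fit(df)
print(model.inspect())   # path estimates, standard errors and p-values
```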

Relevance: 20.00%

Abstract:

This thesis investigates the influence of cultural distance on entrepreneurs’ negotiation behaviour. For this purpose, Turku was chosen as the unit of analysis due to the exponential demographic change experienced during the last two decades, which has resulted in a more diversified local environment. The research aim set for this study was to identify to what extent entrepreneurs face cultural distance, how cultural distance influences the entrepreneur’s negotiation behaviour, and how it can be addressed in order to turn dissimilarities into opportunities. This study presented the relation and apparent dichotomy of cultural distance and global culture, including the component of diversity. The impact of cultural distance on the entrepreneurial mindset and its consequent effect on negotiation behaviour was also presented. Addressing questions about the way individuals perceive, behave and interact allowed the use of interviews for this qualitative research study. In the empirical part of this study, it was found that negotiation behaviour differed in terms of how congenial entrepreneurs felt when managing cultural distance, which affected their performance. It was also acknowledged that, after time and effort, some personal traits were enhanced while others were reduced, allowing for more flexibility and adaptation. Furthermore, depending on the level of trust and shared interests, entrepreneurs determined their attitudinal approach, being adaptive or reactive subject to situational aspects. Additionally, it was found that the acquisition of cultural savvy did not necessarily translate into more creativity. This experiential learning capability led to the proposition of new ways of behaviour. Likewise, it was proposed that growing cultural intelligence bridges distances, reducing mistrust and misunderstandings. The capability of building more collaborative relationships allows entrepreneurs to see cultural distance as a cultural perspective instead of as a threat. It was therefore recommended to focus on proximity rather than distance in order to better identify and exploit untapped opportunities and to perform better when negotiating under any cultural conditions.

Relevance: 20.00%

Abstract:

In this doctoral dissertation, low-voltage direct current (LVDC) distribution system stability, supply security and power quality are evaluated by computational modelling and by measurements on an LVDC research platform. Computational models for LVDC network analysis are developed. Time-domain simulation models are implemented in the PSCAD/EMTDC simulation environment and applied to transient behaviour and power quality studies. The LVDC network power loss model is developed in a MATLAB environment and is capable of fast estimation of the network and component power losses. The model integrates analytical equations that describe the power loss mechanisms of the network components with power flow calculations. For the LVDC network research platform, a monitoring and control software solution is developed. The solution is used to deliver measurement data for verification of the developed models and for analysis of the modelling results. The work describes the power loss mechanisms of the LVDC network components and their main dependencies, and presents the energy loss distribution of the network components. Power quality measurements and current spectra are provided, and harmonic pollution on the DC network is analysed. The transient behaviour of the network is verified through time-domain simulations, and DC capacitor guidelines for an LVDC power distribution network are introduced. The power loss analysis shows that one of the main optimisation targets for an LVDC power distribution network should be the reduction of no-load losses and the improvement of converter efficiency at partial loads. Low-frequency spectra of the network voltages and currents are shown, and harmonic propagation is analysed. Power quality at the LVDC network point of common coupling (PCC) is discussed, and the LVDC network is shown to meet power quality standard requirements. The network behaviour during transients is analysed by time-domain simulations; the network is shown to remain transiently stable during large-scale disturbances, and measurement results on the LVDC research platform confirming this are presented.
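To illustrate how such a loss model combines analytical component loss equations with a load flow, the sketch below estimates converter and cable losses of a single LVDC feeder at partial loads; the 750 V voltage level, loss coefficients and cable resistance are placeholder assumptions, not parameters of the research platform.

```python
import numpy as np

def converter_loss(p_out_kw, p0=0.15, a=0.01, b=0.005):
    """Quadratic converter loss model (kW): no-load, linear and resistive terms (illustrative coefficients)."""
    return p0 + a * p_out_kw + b * p_out_kw ** 2

def cable_loss(p_kw, voltage_v, resistance_ohm):
    """Ohmic loss (kW) of a DC cable carrying power p_kw at the given voltage."""
    current = p_kw * 1e3 / voltage_v
    return current ** 2 * resistance_ohm / 1e3

# Loss distribution across partial loads, highlighting the weight of no-load losses
for p in np.linspace(0.0, 10.0, 5):          # customer-end converter output in kW
    conv = converter_loss(p)
    cab = cable_loss(p + conv, voltage_v=750.0, resistance_ohm=0.2)
    total = conv + cab
    eff = p / (p + total) if p > 0 else 0.0
    print(f"load {p:5.1f} kW  losses {total:5.2f} kW  efficiency {eff:6.1%}")
```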

Relevance: 20.00%

Abstract:

Vibrations in machines can cause noise, decrease performance, or even damage the machine. Vibrations appear when there is a source of excitation acting on the system. In the worst-case scenario, the excitation frequency coincides with a natural frequency of the machine, causing resonance. Rotating machines are a machine type in which the excitation arises from the machine itself. The excitation originates from the mass imbalance of the rotating shaft, which always exists in machines manufactured using conventional methods, and its frequency depends on the rotational speed of the machine. Rotating machines in industrial use are usually designed to rotate at a constant speed, a case in which resonances can easily be avoided. Machines with a varying operational speed, however, are more problematic because a wider range of frequencies has to be avoided. Vibrations whose frequencies equal the rotational frequency of the machine are widely studied and are considered in the typical machine design process. This study concentrates on vibrations that arise from excitations whose frequencies are multiples of the rotational frequency. These vibrations take place when there are two or more excitation components per revolution of the rotating shaft. The dissertation introduces four studies in which three kinds of machines experience vibrations caused by different excitations. The first studied case is a directly driven permanent magnet generator used in a wind power plant. The electromagnetic properties of the generator cause harmonic excitations in the system, and the dynamic responses of the generator are studied using a multibody dynamics formulation. In another study, the finite element method is used to study the vibrations of a magnetic gear due to excitations whose frequencies equal the rotational frequency. The objective is to study the effects of manufacturing and assembly inaccuracies; in particular, the eccentricity of the rotating part with respect to the non-rotating part is studied, since eccentric operation causes a force component in the direction of the shortest air gap. The third machine type is a tube roll of a paper machine, which is studied with two different support structures, each analysed using a different formulation. In the first case, the tube roll is supported by spherical roller bearings that have some waviness on their rolling surfaces. The waviness excites the tube roll, which starts to resonate at a frequency that is half of its first natural frequency, a frequency within the machine’s normal operating range. The tube roll is modeled using the finite element method and the bearings are modeled as nonlinear forces between the tube roll and the pedestals. In the second case, the tube roll is supported by freely rotating discs whose waviness is also measured. The same phenomenon is captured in this case, but the simulation methodology is based on a flexible multibody dynamics formulation. The simulation models used in the last two cases are verified by measuring the actual devices and comparing the simulated and measured results, which show good agreement.
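Because the harmful excitations are integer multiples (orders) of the rotation frequency, a variable-speed design must check every order against every natural frequency over the whole speed range. A minimal sketch of such a check is given below; the natural frequency, speed range and 5 % margin are hypothetical values chosen only for illustration.

```python
def excitation_frequencies(speed_rpm, orders=(1, 2, 3)):
    """Excitation frequencies (Hz) at integer multiples (orders) of the rotation frequency."""
    f_rot = speed_rpm / 60.0
    return {k: k * f_rot for k in orders}

def resonance_risks(speeds_rpm, natural_freqs_hz, orders=(1, 2, 3), margin=0.05):
    """List (speed, order, natural frequency) triples where an excitation falls within the margin."""
    risks = []
    for rpm in speeds_rpm:
        for order, f_exc in excitation_frequencies(rpm, orders).items():
            for f_n in natural_freqs_hz:
                if abs(f_exc - f_n) <= margin * f_n:
                    risks.append((rpm, order, f_n))
    return risks

# Example: a roll whose first natural frequency (24 Hz) is hit by the 2nd-order excitation
print(resonance_risks(range(300, 1501, 50), natural_freqs_hz=[24.0]))
```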

Relevance: 20.00%

Abstract:

Identifying low-dimensional structures and the main sources of variation in multivariate data are fundamental tasks in data analysis. Many methods aimed at these tasks involve the solution of an optimization problem. The objective of this thesis is therefore to develop computationally efficient and theoretically justified methods for solving such problems. Most of the thesis is based on a statistical model in which ridges of the density estimated from the data are considered the relevant features. Finding ridges, which are generalized maxima, necessitates the development of advanced optimization methods. An efficient and convergent trust region Newton method for projecting a point onto a ridge of the underlying density is developed for this purpose. The method is utilized in a differential equation-based approach for tracing ridges and computing projection coordinates along them. The density estimation is done nonparametrically using Gaussian kernels, which allows the application of ridge-based methods with only mild assumptions on the underlying structure of the data. The statistical model and the ridge finding methods are adapted to two different applications. The first is the extraction of curvilinear structures from noisy data mixed with background clutter. The second is a novel nonlinear generalization of principal component analysis (PCA) and its extension to time series data. The methods have a wide range of potential applications where most of the earlier approaches are inadequate. Examples include the identification of faults from seismic data and of filaments from cosmological data. The applicability of the nonlinear PCA to climate analysis and to the reconstruction of periodic patterns from noisy time series data is also demonstrated. Other contributions of the thesis include the development of an efficient semidefinite optimization method for embedding graphs into Euclidean space. The method produces structure-preserving embeddings that maximize interpoint distances. It is primarily developed for dimensionality reduction, but it also has potential applications in graph theory and in various areas of physics, chemistry and engineering. The asymptotic behaviour of ridges and maxima of Gaussian kernel densities is also investigated when the kernel bandwidth approaches infinity. The results are applied to the nonlinear PCA and to finding significant maxima of such densities, which is a typical problem in visual object tracking.
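The ridge-projection step can be illustrated with a simpler iteration than the trust-region Newton method developed in the thesis: a subspace-constrained mean-shift style update on a Gaussian kernel density estimate, in which the mean-shift step is restricted to the Hessian eigendirections of strongest negative curvature so that the iterate drifts onto a one-dimensional ridge. The sketch below is only an illustration of the idea; the data, bandwidth and starting point are arbitrary.

```python
import numpy as np

def kde_grad_hess(x, data, h):
    """Unnormalised gradient and Hessian of a Gaussian KDE at x (constants cancel for ridge finding)."""
    diffs = data - x                                         # (n, D)
    w = np.exp(-0.5 * np.sum(diffs ** 2, axis=1) / h ** 2)   # kernel weights
    grad = (diffs * w[:, None]).sum(axis=0) / h ** 2
    hess = (np.einsum("ni,nj->ij", diffs * w[:, None], diffs) / h ** 4
            - np.eye(x.size) * w.sum() / h ** 2)
    return grad, hess

def project_to_ridge(x, data, h, steps=500, tol=1e-7, ridge_dim=1):
    """Move x onto a ridge of the KDE with a subspace-constrained mean-shift iteration."""
    x = np.asarray(x, dtype=float).copy()
    D = x.size
    for _ in range(steps):
        diffs = data - x
        w = np.exp(-0.5 * np.sum(diffs ** 2, axis=1) / h ** 2)
        mean_shift = (diffs * w[:, None]).sum(axis=0) / w.sum()   # m(x) - x
        _, hess = kde_grad_hess(x, data, h)
        vals, vecs = np.linalg.eigh(hess)          # eigenvalues in ascending order
        V = vecs[:, : D - ridge_dim]               # strongest negative-curvature directions
        step = V @ V.T @ mean_shift                # shift restricted to the normal subspace
        x += step
        if np.linalg.norm(step) < tol:
            break
    return x

# Example: noisy points along a sine curve; project a point onto the underlying ridge
rng = np.random.default_rng(0)
t = rng.uniform(0.0, 2.0 * np.pi, 500)
data = np.column_stack([t, np.sin(t)]) + rng.normal(0.0, 0.1, (500, 2))
print(project_to_ridge([3.0, 0.4], data, h=0.3))
```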

Relevance: 20.00%

Abstract:

Breast cancer is a highly heterogeneous malignancy which, despite a similar histological type, shows different clinical behaviour and response to therapy. Prognostic factors are used to estimate the risk of recurrence and the likelihood of treatment effectiveness. Because breast cancer is one of the most common causes of cancer death in women worldwide, identification of new prognostic markers is needed to develop more specific and targeted therapies. Cancer is caused by uncontrolled cell proliferation. The cell cycle is controlled by specific proteins known as cyclins, which function at important checkpoints by activating cyclin-dependent kinase enzymes. Overexpression of different cyclins has been linked to several cancer types, and altered expression of cyclins A, B1, D1 and E has been associated with poor survival. Little is known about the combined expression of cyclins in relation to tumour grade, breast cancer subtype and other known prognostic factors. In this study, cyclins A, B1 and E were shown to correlate with histological grade, Ki-67 and HER2 expression. Overexpression of cyclin D1 correlated with receptor status and non-basal breast cancer, suggesting that cyclin D1 might be a marker of good prognosis. Proteolysis in the surrounding tumour stroma is increased during cancer development. Matrix metalloproteinases (MMPs) are proteolytic enzymes capable of degrading extracellular matrix proteins. Increased expression and activation of several MMPs have been found in many cancers, and MMPs appear to be important regulators of invasion and metastasis. In this study, MMP-1 expression was analysed in breast cancer epithelial cells and in cancer-associated stromal cells. MMP-1 expression by breast cancer epithelial cells was found to carry independent prognostic value, as did Ki-67 and bcl-2. The results suggest that, in addition to its expression in stromal cells, MMP-1 expression in tumour cells controls breast cancer progression. Decorin is a small proteoglycan and an important component of the extracellular matrix. Decorin has been shown to inhibit the growth of tumour cells, and reduced decorin expression is associated with a poor prognosis in several cancer types. There has been some uncertainty about whether cancer cells themselves express decorin. In this study, decorin expression was shown to localize only in the cells of the original stroma, while breast cancer epithelial cells were negative for decorin expression. However, transduction of decorin into decorin-negative human breast cancer cells markedly modulated the growth pattern of these cells. This study provides evidence that targeted decorin transduction to breast cancer cells could be used as a novel adjuvant therapy in breast malignancies.

Relevance: 20.00%

Abstract:

Keyhole welding, in which the laser beam forms a vapour cavity inside the steel, is one of the two types of laser welding processes and is currently used in few industrial applications. Modern high-power solid-state lasers are coming into wider general use, but not all fundamentals and phenomena of the process are well known, and understanding them helps to improve the quality of final products. This study concentrates on the fundamentals and behaviour of the keyhole welding process by means of real-time high-speed x-ray videography. One of the problem areas in laser welding has been the mixing of the filler wire into the weld; the phenomena involved are explained, and one possible solution to this problem is also presented in this study. The argument of this thesis is that the keyhole laser welding process has three keyhole modes that behave differently: trap, cylinder and kaleidoscope. Two of these have sub-modes in which the keyhole behaves similarly but the molten pool changes its behaviour and the geometry of the resulting weld is different. X-ray videography was used to visualize the actual keyhole side-view profile during the welding process. Several methods were applied to analyse and compile the high-speed x-ray video data to achieve a clearer image of the keyhole side view. Averaging was used to measure the keyhole side-view outline, which was used to reconstruct a 3D model of the actual keyhole. This 3D model was taken as the basis for calculating the vapour volume inside the keyhole for each laser parameter combination and joint geometry. Four different joint geometries were tested: partial-penetration bead-on-plate, partial-penetration I-butt, full-penetration bead-on-plate and full-penetration I-butt joints. The comparison was performed with selected pairs, and all combinations were also compared together.
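Assuming rotational symmetry of the keyhole, the vapour volume can be obtained from the averaged side-view outline as a solid of revolution, integrating the cross-sectional area over depth. The sketch below shows this step on an invented outline; the depth profile and radii are hypothetical, not measured values from the study.

```python
import numpy as np

def keyhole_volume(depth_mm, radius_mm):
    """Vapour cavity volume (mm^3) from a side-view outline, assuming rotational symmetry."""
    depth = np.asarray(depth_mm, dtype=float)
    radius = np.asarray(radius_mm, dtype=float)
    return np.trapz(np.pi * radius ** 2, depth)   # integrate pi * r(z)^2 over depth

# Hypothetical outline: radius tapering from 0.4 mm at the surface to 0.1 mm at 5 mm depth
z = np.linspace(0.0, 5.0, 51)
r = 0.4 - 0.06 * z
print(f"estimated vapour volume: {keyhole_volume(z, r):.2f} mm^3")
```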

Relevance: 20.00%

Abstract:

At present, one of the main concerns of green networking is to minimize the power consumption of network infrastructure. Surveys show that the highest amount of power is consumed by network devices during their runtime. However, to control this power consumption it is important to know which factors have the highest impact on it. This paper focuses on measuring and modelling the power consumption of an Ethernet switch during runtime, considering various input parameters in all possible combinations. For the experiment, three input parameters are chosen: bandwidth, link load and the number of connections. The output to be measured is the power consumption of the Ethernet switch. Due to the uncertain power consumption pattern of the Ethernet switch, a fully comprehensive experimental evaluation would require an infeasible and cumbersome experimental phase. Because of this, the design of experiments (DoE) method has been applied to obtain adequate information on the effect of each input parameter on the power consumption. The work consists of three parts. In the first part, a test bed is planned around the input parameters and the power consumption of the switch is measured. The second part generates a mathematical model with the help of design of experiments tools; this model can be used to estimate power consumption precisely in different scenarios and to pinpoint the parameters with the greatest influence on power consumption. In the last part, the mathematical model is evaluated by comparison with the experimental values.
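As an illustration of the modelling step, the sketch below builds a two-level full-factorial design over the three factors and fits a first-order regression model of switch power against them; the factor levels, placeholder "measurements" and resulting coefficients are invented, since the real model must be fitted to the test-bed data.

```python
import itertools
import numpy as np

# Hypothetical two-level full-factorial design for the three studied factors
levels = {
    "bandwidth_mbps": [10, 1000],
    "load_pct": [10, 90],
    "connections": [1, 24],
}
design = np.array(list(itertools.product(*levels.values())), dtype=float)

# Placeholder "measurements" so the sketch runs; in practice these come from the test bed
rng = np.random.default_rng(0)
measured_power_w = (30.0 + 0.002 * design[:, 0] + 0.05 * design[:, 1]
                    + 0.10 * design[:, 2] + rng.normal(0.0, 0.2, len(design)))

# First-order model: P = b0 + b1*bandwidth + b2*load + b3*connections
X = np.column_stack([np.ones(len(design)), design])
coeffs, *_ = np.linalg.lstsq(X, measured_power_w, rcond=None)
print(dict(zip(["intercept", *levels.keys()], np.round(coeffs, 4))))
```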

Relevance: 20.00%

Abstract:

The topic of this thesis is professional translators’ information seeking when only online sources are available. The study examines where and how professional translators search for information on the internet when translating a source text from English into Finnish. In addition, the study aims to show that information-seeking skills and source criticism are translation competences that should be both maintained and taught as part of translator training. The research data were collected empirically using three methods. The translation process and the information seeking that took place during it were recorded with the Camtasia screen-recording software and the Translog-II keystroke-logging program. In addition, the translators who participated in the study completed two questionnaires, the first containing background questions and the second retrospective questions about the process itself. The questionnaires were implemented with the Webropol survey tool. Data were collected from a total of five test sessions. The study examined in more detail the information-seeking actions of three professional translators by isolating from their translation processes the pauses during which the translators searched for information on the internet. Regarding the online sources used, the results were similar to those of earlier studies: the most frequently used were Google, Wikipedia and various online dictionaries. However, this study revealed that professional translators’ information-seeking patterns vary depending on both the translator’s field of specialisation and the level of their information-seeking skills. When forced to work outside their familiar working environment and their own field of specialisation, some professional translators also fall back on the most rudimentary information-seeking techniques, which translation students have commonly been observed to use. The results also revealed that information seeking can take up to 70 per cent of the time spent on the whole translation process, depending on the translator’s prior knowledge of the subject matter of the source text and on the efficiency of the information seeking. On the basis of these results, it can be said that professional translators, too, should develop their information-seeking skills to keep their translation process efficient. In addition, translators should remember to evaluate their information sources critically: source criticism is particularly necessary when online sources are used. For this reason, information-seeking skills and source criticism should be taught and practised already as part of translator training. Translators should also not leave their information seeking to online sources alone, but should continue to make use of printed sources as well as personal contacts.

Relevance: 20.00%

Abstract:

Resilience is the property of a system to remain trustworthy despite changes. Changes of various natures, whether due to failures of system components or varying operational conditions, significantly increase the complexity of system development. Therefore, advanced development technologies are required to build robust and flexible system architectures capable of adapting to such changes. Moreover, powerful quantitative techniques are needed to assess the impact of these changes on various system characteristics. Architectural flexibility is achieved by embedding into the system design the mechanisms for identifying changes and reacting to them. Hence a resilient system should have both advanced monitoring and error detection capabilities to recognise changes, as well as sophisticated reconfiguration mechanisms to adapt to them. The aim of such reconfiguration is to ensure that the system stays operational, i.e., remains capable of achieving its goals. Design, verification and assessment of system reconfiguration mechanisms is a challenging and error-prone engineering task. In this thesis, we propose and validate a formal framework for the development and assessment of resilient systems. Such a framework provides us with the means to specify and verify complex component interactions, model their cooperative behaviour in achieving system goals, and analyse the chosen reconfiguration strategies. Due to the variety of properties to be analysed, such a framework should have an integrated nature. To ensure functional correctness, it should rely on formal modelling and verification, while, to assess the impact of changes on such properties as performance and reliability, it should be combined with quantitative analysis. To ensure scalability of the proposed framework, we choose Event-B as the basis for reasoning about functional correctness. Event-B is a state-based formal approach that promotes the correct-by-construction development paradigm and formal verification by theorem proving. Event-B has mature industrial-strength tool support – the Rodin platform. Proof-based verification, as well as the reliance on abstraction and decomposition adopted in Event-B, provides designers with powerful support for the development of complex systems. Moreover, top-down system development by refinement allows the developers to explicitly express and verify critical system-level properties. Besides ensuring functional correctness, to achieve resilience we also need to analyse a number of non-functional characteristics, such as reliability and performance. Therefore, in this thesis we also demonstrate how formal development in Event-B can be combined with quantitative analysis. Namely, we experiment with integrating such techniques as probabilistic model checking in PRISM and discrete-event simulation in SimPy with formal development in Event-B. Such an integration allows us to assess how changes and different reconfiguration strategies affect overall system resilience. The approach proposed in this thesis is validated by a number of case studies from areas such as robotics, space, healthcare and the cloud domain.
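To give a flavour of the quantitative side of such a framework (separately from the Event-B development itself), the sketch below uses the SimPy discrete-event simulation library to estimate the availability of a single component that fails at random and is then reconfigured; the failure rate, repair time and monitoring period are arbitrary placeholders, not values from the case studies.

```python
import random
import simpy

def component(env, mttf, repair_time, state):
    """A component that fails after an exponentially distributed time and is then reconfigured."""
    while True:
        yield env.timeout(random.expovariate(1.0 / mttf))   # time to failure
        state["up"] = False
        yield env.timeout(repair_time)                       # reconfiguration / repair delay
        state["up"] = True

def monitor(env, state, samples, period=1.0):
    """Sample the component state periodically to estimate availability."""
    while True:
        samples.append(state["up"])
        yield env.timeout(period)

random.seed(1)
state, samples = {"up": True}, []
env = simpy.Environment()
env.process(component(env, mttf=100.0, repair_time=5.0, state=state))
env.process(monitor(env, state, samples))
env.run(until=10_000)
print(f"estimated availability: {sum(samples) / len(samples):.3f}")
```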

Relevance: 20.00%

Abstract:

The aim of this thesis was to examine how aquatic organisms, such as fish, behave under altered environmental conditions. Many species of fish use vision as their primary tool for gaining information about their surrounding environment. The visual conditions of aquatic habitats are often altered as a result of anthropogenic disturbance, such as eutrophication, which initiates algal turbidity. In general, turbidity reduces visibility and can be hypothesized to influence the behaviour of fish. I used the three-spined stickleback (Gasterosteus aculeatus) as a model species and conducted four laboratory studies to test how algal turbidity affects its behaviour. In this thesis, two major behavioural aspects are discussed. The first is antipredator behaviour. In study I, the combined effects of turbidity and shoot density on habitat choice (shelter vs open) were tested on a group of sticklebacks (20 fish) in the presence and absence of piscivorous perch (Perca fluviatilis). In study II, I examined the behavioural responses of feeding sticklebacks when they were exposed to the sudden appearance of an avian predator (the silhouette of a common tern, Sterna hirundo). The study was done in turbid and clear water using three different group sizes (1, 3 and 6 fish). The second aspect is foraging behaviour. Studies III and IV focused on the effects of algal turbidity on the foraging performance of sticklebacks. In study III, I conducted two separate experiments to examine the effects of turbidity on prey consumption and prey choice; turbidity levels and the proportion of large and small prey (Daphnia spp.) were manipulated. In study IV, I studied whether a group of six sticklebacks can distribute themselves according to the food input at two feeding stations in a way that provides each fish with the same amount of food in clear and turbid water. I also observed whether the fish can follow changes in resource distribution between the foraging patches. My results indicate an overall influence of algal turbidity on the antipredator and foraging behaviour of sticklebacks. In the presence of a potential predator, the use of the sheltered habitat was more pronounced at higher turbidity. In addition, sticklebacks reduced their activity levels in the presence of a predator at higher turbidity and shoot density levels, suggesting a possible antipredator adaptation. When exposed to the sudden appearance of an avian predator, sticklebacks showed a weaker antipredator response in turbid water, which suggests that turbidity degrades their risk assessment capabilities. I found an effect of group size, but not of turbidity, on the proportion of sticklebacks that fled to the shelter area, which indicates that sticklebacks are able to communicate among group members at the experimental turbidity levels. I found an overall negative effect of turbidity on food intake. Both turbidity and changes in the proportion of prey sizes played a significant role in the sticklebacks’ prey selection. At lower turbidity levels (clear, <1 NTU, and 5 NTU) sticklebacks showed a preference for large prey, whereas in more turbid conditions, and when the proportion of large to small prey increased, sticklebacks became increasingly random in their prey selection. Finally, my results showed that groups of sticklebacks dispersed themselves between feeding stations according to the reward ratios, following the predictions of ideal free distribution theory. However, they took a significantly longer time to reach the equilibrium distribution in turbid water than in clear water. In addition, they showed a slower response to changes in resource distribution in a turbid environment. These findings suggest that turbidity interferes with information transfer among group foragers. It is important to understand that aquatic animals are often exposed to a degraded environment. The findings of this thesis suggest that algal turbidity negatively affects their behavioural performance. The results also shed light on the underlying behavioural strategies of sticklebacks in turbid conditions, which might help them adapt to an altered environmental situation and increase their survival. In conclusion, I hold that although algal turbidity has detrimental effects on the antipredator and foraging behaviour of sticklebacks, their behavioural adjustment might help them adapt to a changing environment.
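The ideal free distribution prediction referred to above is simply that foragers split between patches in proportion to the patches' reward rates; a minimal sketch of the prediction for the two-station setup (with a hypothetical 2:1 food input ratio) is given below.

```python
def ideal_free_distribution(group_size, reward_rates):
    """Predicted number of foragers at each feeding station under the ideal free distribution."""
    total = sum(reward_rates)
    return [group_size * r / total for r in reward_rates]

# Six fish and two feeding stations with a 2:1 food input ratio
print(ideal_free_distribution(6, [2, 1]))   # -> [4.0, 2.0]
```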

Relevance: 20.00%

Abstract:

Background: Physical inactivity and a positive energy balance pose a risk to health, as they increase the risk of obesity and of associated non-communicable diseases. Recently, sedentary behaviour has also been associated with obesity and non-communicable diseases. Nevertheless, it has been unclear which type of sedentary behaviour is the most harmful. It is also unknown whether the relationship of sedentary behaviour with obesity is truly independent of other factors, for example physical activity and diet. Longitudinal data are limited, and the direction of causality and the mechanism of action are still unknown.

Aims: The aim of this study was 1) to identify the type of sedentary behaviour with the strongest association with obesity, 2) to explore the causal relationship between sedentary behaviour and weight increase, and 3) to investigate the relationship of sedentary behaviour with fatty liver. These were studied in cross-sectional and/or longitudinal settings using data from the Cardiovascular Risk in Young Finns Study. Special emphasis was put on the evaluation of a wide range of other lifestyle factors and risks for obesity and fatty liver.

Subjects: 2,060 subjects (aged 33-50 years in 2011, of whom 55% were female) from the Cardiovascular Risk in Young Finns Study participating in the follow-ups in 2001, 2007, and 2011.

Measures: Self-reported time spent in various types of sedentary behaviour (I) or TV viewing time (I-III); measured body weight, height and waist circumference (I-III); and genetic variants for high BMI (I). Fasting plasma concentrations of gamma-glutamyltransferase and triglycerides, the calculated Fatty Liver Index (based on gamma-glutamyltransferase and triglyceride concentrations, BMI and waist circumference), and the amount of intrahepatic fat measured with ultrasound (III). Self-reported leisure-time physical activity and active commuting, occupational physical activity, energy intake, diet, alcohol consumption, smoking, socioeconomic status, and sleep duration were considered as possible confounders (I-III).

Results: TV viewing is the type of sedentary behaviour with the strongest association with obesity. Sedentary behaviour (TV viewing) precedes weight increase, and not the other way around. Sedentary behaviour (TV viewing) is associated with an increased risk of fatty liver.

Conclusions: Sedentary behaviour (especially high TV viewing time) is associated with increased risks of obesity and fatty liver. Intervention studies are needed to assess whether reducing TV time would prevent obesity and fatty liver.
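The Fatty Liver Index mentioned under Measures is the published index of Bedogni et al. (2006), computed from triglycerides, BMI, GGT and waist circumference. The sketch below reproduces that formula as commonly cited; the coefficients should be verified against the original publication before any real use, and the example inputs are arbitrary.

```python
import math

def fatty_liver_index(triglycerides_mg_dl, bmi, ggt_u_l, waist_cm):
    """Fatty Liver Index (0-100); coefficients as commonly cited from Bedogni et al. (2006)."""
    y = (0.953 * math.log(triglycerides_mg_dl) + 0.139 * bmi
         + 0.718 * math.log(ggt_u_l) + 0.053 * waist_cm - 15.745)
    return 100.0 * math.exp(y) / (1.0 + math.exp(y))

# Arbitrary example values: triglycerides 150 mg/dL, BMI 27, GGT 40 U/L, waist 95 cm
print(round(fatty_liver_index(150.0, 27.0, 40.0, 95.0), 1))
```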