Abstract:
Broadcasting systems are networks where the transmission is received by several terminals. Generally, broadcast receivers are passive devices in the network, meaning that they do not interact with the transmitter. Providing a certain Quality of Service (QoS) for the receivers in a heterogeneous reception environment with no feedback is not an easy task. Forward error control coding can be used for protection against transmission errors to enhance the QoS of broadcast services. For good performance in terrestrial wireless networks, diversity should be utilized; this is done by applying interleaving together with forward error correction codes. This dissertation studies the design and analysis of forward error control and control signaling for providing QoS in wireless broadcasting systems. Control signaling is used in broadcasting networks to give the receiver the information necessary to connect to the network itself and to receive the services being transmitted. Control signaling is usually transmitted through a dedicated path in these systems. Therefore, the relationship between the signaling and service data paths should be considered early in the design phase. Modeling and simulations are used in the case studies of this dissertation to study this relationship. The dissertation begins with a survey of the broadcasting environment and the mechanisms for providing QoS therein. Case studies then present the analysis and design of such mechanisms in real systems. The first case study analyzes the mechanisms for providing QoS, considering the signaling and service data paths and their relationship, at the DVB-H link layer. In particular, the performance of different service data decoding mechanisms and optimal signaling transmission parameter selection are presented. The second case study investigates the design of the signaling and service data paths for the more modern DVB-T2 physical layer. Furthermore, by comparing the performance of the signaling and service data paths through simulations, configuration guidelines for DVB-T2 physical layer signaling are given. These guidelines can prove useful when configuring DVB-T2 transmission networks. Finally, recommendations for the design of data and signaling paths are given based on the findings of the case studies. The requirements for the signaling design should be derived from the requirements for the main services. Generally, the requirements for signaling should be more demanding, as signaling is the enabler of service reception.
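As a toy illustration of the diversity mechanism described above, the sketch below shows how block interleaving spreads a burst of channel erasures across several FEC codewords so that each codeword sees at most one erasure, which a simple code can then correct. It is a generic Python example; the dimensions and the codeword grouping are illustrative assumptions, not the interleaver or code used in DVB-H or DVB-T2.

```python
# Toy illustration: block interleaving spreads a burst error across
# FEC codewords so each codeword sees only isolated errors.

def interleave(symbols, rows, cols):
    """Write row-by-row into a rows x cols block, read column-by-column."""
    assert len(symbols) == rows * cols
    block = [symbols[r * cols:(r + 1) * cols] for r in range(rows)]
    return [block[r][c] for c in range(cols) for r in range(rows)]

def deinterleave(symbols, rows, cols):
    """Inverse of interleave()."""
    assert len(symbols) == rows * cols
    block = [symbols[c * rows:(c + 1) * rows] for c in range(cols)]
    return [block[c][r] for r in range(rows) for c in range(cols)]

data = list(range(12))              # 4 codewords of 3 symbols each
tx = interleave(data, rows=4, cols=3)
tx[0:3] = ['?', '?', '?']           # burst of 3 channel erasures
rx = deinterleave(tx, rows=4, cols=3)
# After deinterleaving, the burst is spread out: each 3-symbol codeword
# contains at most one erasure, which a simple FEC can correct.
print(rx)
```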
Abstract:
Modern machine structures are often fabricated by welding. From a fatigue point of view, structural details, and especially welded details, are the most prone to fatigue damage and failure. Design against fatigue requires information on the fatigue resistance of a structure's critical details and on the stress loads that act on each detail. Even though dynamic simulation of flexible bodies is already an established method for analyzing structures, obtaining the stress history of a structural detail during dynamic simulation is a challenging task, especially when the detail has a complex geometry. In particular, analyzing the stress history of every structural detail within a single finite element model can be overwhelming, since the number of nodal degrees of freedom needed in the model may require an impractical amount of computational effort. The purpose of computer simulation is to reduce the number of prototypes and speed up the product development process. Also, to take operator influence into account, real-time models, i.e., simplified and computationally efficient models, are required. This, in turn, requires stress computation to be efficient if it is to be performed during dynamic simulation. The research revisits the theoretical background of multibody dynamic simulation and the finite element method to find suitable components for a new approach to efficient stress calculation. This study proposes that the problem of stress calculation during dynamic simulation can be greatly simplified by combining the floating frame of reference formulation with modal superposition and a sub-modeling approach. In practice, the proposed approach can be used to efficiently generate the stress history relevant to fatigue assessment for a structural detail during or after dynamic simulation. Numerical examples are presented to demonstrate the proposed approach in practice. The results show that the approach is applicable and can be used as proposed.
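As a rough sketch of the modal superposition idea at the heart of the proposed approach: once the modal coordinates of the flexible body are known from the multibody simulation, the stress history at a detail reduces to a linear combination of precomputed modal stress fields. The matrices below are randomly generated placeholders, not results from the dissertation.

```python
import numpy as np

# Sketch of modal stress recovery: stress at a detail is a linear
# combination of modal stress fields weighted by the modal coordinates
# obtained from the dynamic simulation. All values are illustrative.

n_modes, n_steps = 6, 1000
rng = np.random.default_rng(0)

# Modal stress row for one hot spot: stress per unit of each modal
# coordinate (would come from an FE solution of the sub-model).
S_modal = rng.normal(size=(1, n_modes))      # [MPa per unit coordinate]

# Modal coordinate histories from the multibody simulation.
q = rng.normal(size=(n_modes, n_steps))

# Stress history at the detail: a cheap matrix product per time step,
# instead of a full FE stress solution at every step.
sigma = S_modal @ q                          # shape (1, n_steps)
print(sigma.shape)
```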
Abstract:
The aim of the present study was to assess the spectral behavior of the erector spinae muscle during isometric contractions performed before and after a dynamic manual load-lifting test carried out with the trunk, in order to determine the capacity of the muscle to perform this task. Nine healthy female students participated in the experiment. Their average age, height, and body mass (± SD) were 20 ± 1 years, 1.6 ± 0.03 m, and 53 ± 4 kg, respectively. The development of muscle fatigue was assessed by spectral analysis (median frequency) and root mean square over time. The test consisted of repeated bending movements of the trunk, from a 45° angle of flexion to the upright position, with the application of approximately 15, 25, and 50% of the maximum individual load. The protocol used proved to be more reliable for the identification of muscle fatigue by electromyography as a function of time with loads exceeding 50% of the maximum. Most of the volunteers showed an increase in root mean square over time on both the right (N = 7) and the left (N = 6) side, indicating a tendency to become fatigued. With respect to changes in the median frequency of the electromyographic signal, the loads used in this study had no significant effect on either the right or the left side of the erector spinae muscle, suggesting that higher loads and percentages would produce more substantial results in the study of isotonic contractions.
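For readers unfamiliar with the two fatigue indicators used here, the sketch below computes the root mean square amplitude and the median frequency of a surface EMG signal. The sampling rate, window length, and the signal itself are placeholders, not the study's recordings.

```python
import numpy as np
from scipy.signal import welch

# Sketch of the two EMG fatigue indicators: RMS amplitude and the
# median frequency (the frequency that splits the power spectrum into
# two halves of equal power).

fs = 1000.0                                  # EMG sampling rate [Hz]
t = np.arange(0, 2.0, 1 / fs)
emg = np.random.default_rng(1).normal(size=t.size)   # stand-in signal

rms = np.sqrt(np.mean(emg ** 2))

f, pxx = welch(emg, fs=fs, nperseg=512)
cum = np.cumsum(pxx)
mdf = f[np.searchsorted(cum, cum[-1] / 2)]   # median frequency [Hz]

# During fatiguing contractions, RMS typically rises over time while
# the median frequency shifts downward.
print(f"RMS = {rms:.3f}, MDF = {mdf:.1f} Hz")
```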
Abstract:
In this article, we compare two strategies for atherosclerosis treatment: drugs and a healthy lifestyle. Statins are the principal drugs used for the treatment of atherosclerosis. Several secondary prevention studies have demonstrated that statins can significantly reduce cardiovascular events, including coronary death, the need for surgical revascularization, stroke, total mortality, and fatal and non-fatal myocardial infarction. These results were observed in men and women, the elderly, smokers and non-smokers, and diabetics and hypertensives. Primary prevention studies yielded similar results, although total mortality was not affected. Statins also induce atheroma regression and do not cause cancer. However, many unresolved issues remain, such as partial risk reduction, costs, several potential side effects, and long-term use by young patients. Statins act mainly as lipid-lowering drugs, but pleiotropic actions are also present. A healthy lifestyle, on the other hand, is effective and inexpensive and has no harmful effects. Five items are associated with lower cardiac risk: non-smoking, BMI ≤ 25, regular exercise (30 min/day), a healthy diet (fruits, vegetables, low saturated fat), and moderate alcohol intake (5-30 g/day). Nevertheless, there are difficulties in implementing these measures at both the individual and population levels. Changes in behavior require multidisciplinary care, including medical, nutritional, and psychological counseling. Participation of the entire society is required for such implementation, i.e., universities, schools, the media, government, and medical societies. Although these efforts represent a major challenge, the task must be faced in order to halt the atherosclerosis epidemic that threatens the world.
Power Electronic Converters in Low-Voltage Direct Current Distribution – Analysis and Implementation
Abstract:
Over recent years, smart grids have received great public attention. Many proposed functionalities rely on power electronics, which, together with the communication network, play a key role in the smart grid. However, “smartness” is not the only driver motivating research into distribution networks based on power electronics; the vulnerability of networks to natural hazards has resulted in tightening supply security requirements, set both by electricity end-users and by the authorities. Because of favorable price developments and advancements in the field, direct current (DC) distribution has become an attractive alternative for distribution networks. In this doctoral dissertation, power electronic converters for a low-voltage DC (LVDC) distribution system are investigated. These include the rectifier located at the beginning of the LVDC network and the customer-end inverter (CEI) on the customer premises. Rectifier topologies are introduced, and topologies are chosen for the analysis according to the LVDC system requirements. Similarly, suitable CEI topologies are addressed and selected for study. Applying power electronics to electricity distribution poses some new challenges. Because the electricity end-user is supplied through the CEI, the CEI is responsible for the end-user voltage quality, but it also has to be able to supply adequate current in all operating conditions, including a short circuit, to ensure electrical safety. Supplying short-circuit current with power electronics requires additional measures; therefore, the short-circuit behavior is described, and methods to handle the high-current supply to the fault are proposed. Power electronic converters also produce common-mode (CM) and radio-frequency (RF) electromagnetic interference (EMI), which is not present in AC distribution; hence, its magnitude is investigated. To enable comprehensive research in the LVDC distribution field, a research site was built in a public low-voltage distribution network. The implementation was a joint effort by the LVDC research team of Lappeenranta University of Technology and the power company Suur-Savon Sähkö Oy. Measurements could thus be conducted in an actual environment, which is important especially for the EMI studies. The main results of the work concern the short-circuit operation of the CEI and the EMI issues. The applicability of power electronic converters to electricity distribution is demonstrated, and suggestions for future research are proposed.
Abstract:
This work presents a synopsis of efficient strategies used in power management for achieving the most economical power and energy consumption in multicore systems, FPGAs, and NoC platforms. A practical approach was taken in an effort to validate the significance of the Adaptive Power Management Algorithm (APMA) proposed for the system developed for this thesis project. The system comprises an arithmetic logic unit, up and down counters, an adder, a state machine, and a multiplexer. The purpose of the project was, firstly, to develop a system to be used for this power management work; secondly, to perform area and power synopses of the system on various scalable technology platforms (UMC 90 nm technology at 1.2 V, UMC 90 nm technology at 1.32 V, and UMC 180 nm technology at 1.80 V) in order to examine the differences in the system's area and power consumption across the platforms; and thirdly, to explore various strategies for reducing the system's power consumption and to propose an adaptive power management algorithm for it. The strategies introduced in this work comprise Dynamic Voltage and Frequency Scaling (DVFS) and task parallelism. After development, the system was run on an FPGA board (essentially a NoC platform) and on the various technology platforms listed above. The system synthesis was successfully accomplished, the simulated result analysis shows that the system meets all functional requirements, and the power consumption and area utilization were recorded and analyzed in Chapter 7 of this work. This work also extensively reviews various strategies for managing power consumption, drawing on quantitative research by many researchers and companies; it is a mixture of study analysis and experimental lab work, and it condenses and presents the basic concepts of power management strategy from quality technical papers.
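As background for the DVFS strategy mentioned above, the sketch below evaluates the standard dynamic CMOS power relation P = a·C·V²·f at the three supply voltages named in the thesis. The capacitance, activity factor, and frequencies are illustrative assumptions, not values from the work.

```python
# Sketch of why DVFS saves power: dynamic CMOS switching power scales
# as P = a * C * V^2 * f, so lowering voltage and frequency together
# gives a roughly cubic reduction.

def dynamic_power(c_eff, v_dd, freq, activity=0.5):
    """Dynamic switching power in watts."""
    return activity * c_eff * v_dd ** 2 * freq

C_EFF = 1e-9                        # effective switched capacitance [F]
for v, f in [(1.80, 200e6), (1.32, 150e6), (1.20, 100e6)]:
    p = dynamic_power(C_EFF, v, f)
    print(f"Vdd = {v:.2f} V, f = {f/1e6:.0f} MHz -> P = {p*1e3:.2f} mW")
```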
Abstract:
This thesis describes the process of building software for transport accessibility analysis. The goal was to create software that is easy to distribute and simple to use for users without a particular background in geographical data analysis. It was shown that existing tools do not suit this particular task due to complex interfaces or significant rendering times. The goal was accomplished by applying modern approaches to building web applications, such as maps based on vector tiles, the FLUX architecture design pattern, and module bundling. It was discovered that vector tiles have considerable advantages over image-based tiles, such as faster rendering and real-time styling.
Abstract:
Objective: Overuse injuries in violinists are a problem that has been analyzed primarily through the use of questionnaires. Simultaneous 3D motion analysis and EMG measurement of muscle activity has been suggested as a quantitative technique for exploring this problem by identifying movement patterns and muscular demands that may predispose violinists to overuse injuries. This multi-disciplinary analysis technique has, so far, had limited use in the music world. The purpose of this study was to use it to characterize the demands of a violin bowing task. Subjects: Twelve injury-free violinists volunteered for the study. The subjects were assigned to a novice or expert group based on playing experience, as determined by questionnaire. Design and Settings: Muscle activity and movement patterns were assessed while the violinists played five bowing cycles (one bowing cycle = one down-bow + one up-bow) on each string (G, D, A, E), at a pulse of 4 beats per bow and 100 beats per minute. Measurements: An upper extremity model, created using coordinate data from markers placed on the right acromion process, the lateral epicondyle of the humerus, and the ulnar styloid, was used to determine minimum and maximum joint angles, ranges of motion (ROM), and angular velocities at the shoulder and elbow of the bowing arm. Muscle activity in the right anterior deltoid, biceps brachii, and triceps brachii was assessed during maximal voluntary contractions (MVC) and during the playing task. Data were analysed for significant differences across the strings and between experience groups. Results: Elbow flexion/extension ROM was similar across strings for both groups. Shoulder flexion/extension ROM changed across strings and was larger for the experts. Angular velocity changes mirrored changes in ROM. Deltoid was the most active of the muscles assessed (20% MVC) and displayed a pattern of constant activation to maintain shoulder abduction. Biceps and triceps were less active (4-12% MVC) and showed a more periodic 'on and off' pattern. Novices' muscle activity was higher in all cases. Experts' muscle activity showed a consistent pattern across strings, whereas the novices' was more irregular. The agonist-antagonist roles of biceps and triceps during the bowing motion were clearly defined in the expert group, but not as apparent in the novice group. Conclusions: Bowing movement appears to be controlled by the shoulder rather than the elbow, as shoulder ROM changed across strings while elbow ROM remained the same. Shoulder injuries are probably due to repetition, as the muscle activity required for the movement is small. Experts require a smaller amount of muscle activity to perform the movement, possibly due to more efficient muscle activation patterns developed through practice. This quantitative multidisciplinary approach to analysing violinists' movements can contribute to a fuller understanding of both playing demands and injury mechanisms.
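As an illustration of how joint angles can be derived from the three markers named in the Measurements section, the sketch below computes an elbow angle as the angle between the upper-arm and forearm vectors. The marker coordinates are invented for the example and do not come from the study.

```python
import numpy as np

# Sketch: elbow angle from three markers (acromion, lateral epicondyle,
# ulnar styloid) as the angle between the adjoining segment vectors.

def joint_angle(proximal, joint, distal):
    """Angle at `joint` (degrees) between the two adjoining segments."""
    u = np.asarray(proximal) - np.asarray(joint)   # upper-arm vector
    v = np.asarray(distal) - np.asarray(joint)     # forearm vector
    cos_a = np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))
    return np.degrees(np.arccos(np.clip(cos_a, -1.0, 1.0)))

acromion   = [0.00, 0.00, 1.40]    # made-up 3D coordinates [m]
epicondyle = [0.05, -0.30, 1.15]
styloid    = [0.30, -0.35, 1.00]

print(f"elbow angle = {joint_angle(acromion, epicondyle, styloid):.1f} deg")
# Repeating this per motion-capture frame gives the angle history;
# its min/max give ROM, and differentiating gives angular velocity.
```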
Abstract:
Traditional psychometric theory and practice classify people according to broad ability dimensions but do not examine how these mental processes occur. Hunt and Lansman (1975) proposed a 'distributed memory' model of cognitive processes with emphasis on how to describe individual differences, based on the assumption that each individual possesses the same components. It is in the quality of these components that individual differences arise. Carroll (1974) expands Hunt's model to include a production system (after Newell and Simon, 1973) and a response system. He developed a framework of factor analytic (FA) factors for the purpose of describing how individual differences may arise from them. This scheme is to be used in the analysis of psychometric tests. Recent advances in the field of information processing are examined and include: 1) Hunt's development of differences between subjects designated as high or low verbal; 2) Miller's pursuit of the magic number seven, plus or minus two; 3) Ferguson's examination of transfer and abilities; and 4) Brown's discoveries concerning strategy teaching and retardates. In order to examine possible sources of individual differences arising from cognitive tasks, traditional psychometric tests were searched for a suitable perceptual task which could be varied slightly and administered to gauge learning effects produced by controlling independent variables. It also had to be suitable for analysis using Carroll's framework. The Coding Task (a symbol substitution test) found in the Performance Scale of the WISC was chosen. Two experiments were devised to test the following hypotheses: 1) High verbals should be able to complete significantly more items on the Symbol Substitution Task than low verbals (Hunt and Lansman, 1975). 2) Having previous practice on a task, where strategies involved in the task may be identified, increases the amount of output on a similar task (Carroll, 1974). 3) There should be a substantial decrease in the amount of output as the load on STM is increased (Miller, 1956). 4) Repeated measures should produce an increase in output over trials, and where individual differences in previously acquired abilities are involved, these should differentiate individuals over trials (Ferguson, 1956). 5) Teaching slow learners a rehearsal strategy would improve their learning such that it would resemble that of normals on the same task (Brown, 1974). In the first experiment, 60 subjects were divided into high and low verbal groups, each further divided randomly into a practice group and a non-practice group. Five subjects in each group were assigned randomly to work on a five-, seven-, or nine-digit code throughout the experiment. The practice group was given three trials of two minutes each on the practice code (designed to eliminate transfer effects due to symbol similarity) and then three trials of two minutes each on the actual SST task. The non-practice group was given three trials of two minutes each on the same actual SST task. Results were analyzed using a four-way analysis of variance. In the second experiment, 18 slow learners were divided randomly into two groups, one group receiving planned strategy practice, the other receiving random practice. Both groups worked on the actual code to be used later in the actual task. Within each group, subjects were randomly assigned to work on a five-, seven-, or nine-digit code throughout. Both practice and actual tests consisted of three trials of two minutes each.
Results were analyzed using a three-way analysis of variance. It was found in the first experiment that 1) high or low verbal ability by itself did not produce significantly different results; however, in interaction with the other independent variables, a difference in performance was noted. 2) The previous practice variable was significant over all segments of the experiment: those who received previous practice scored significantly higher than those without it. 3) Increasing the size of the load on STM severely restricts performance. 4) The effect of repeated trials proved to be beneficial: generally, gains were made on each successive trial within each group. 5) In the second experiment, slow learners who were allowed to practice randomly performed better on the actual task than subjects who were taught the code by means of a planned strategy. Upon analysis using the Carroll scheme, individual differences were noted in the ability to develop strategies of storing, searching, and retrieving items from STM, and in adopting the rehearsals necessary for retention in STM. While these strategies may benefit some, it was found that for others they may be harmful. Temporal aspects and perceptual speed were also found to be sources of variance within individuals. Generally, it was found that the largest single factor influencing learning on this task was the repeated measures. What enables gains to be made varies with individuals. Environmental factors, specific abilities, strategy development, previous learning, amount of load on STM, and perceptual and temporal parameters all influence learning, and these have serious implications for educational programs.
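For readers unfamiliar with the factorial designs used here, the sketch below runs a four-factor ANOVA (verbal ability × practice × STM load × trial) on randomly generated data with statsmodels. It treats all factors as between-subjects for simplicity, whereas the experiment used repeated measures over trials; the data and effect structure are placeholders, not the study's data.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
from statsmodels.formula.api import ols

# Sketch of a four-way factorial ANOVA on made-up data.
rng = np.random.default_rng(2)
n = 240
df = pd.DataFrame({
    "verbal":   rng.choice(["high", "low"], n),
    "practice": rng.choice(["yes", "no"], n),
    "load":     rng.choice([5, 7, 9], n),      # STM load (digits in code)
    "trial":    rng.choice([1, 2, 3], n),
    "output":   rng.normal(50, 10, n),         # items completed
})

model = ols("output ~ C(verbal) * C(practice) * C(load) * C(trial)", df).fit()
print(sm.stats.anova_lm(model, typ=2))         # F tests per effect
```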
Abstract:
The present study investigates the usefulness of a multi-method approach to the measurement of reading motivation and achievement. A sample of 127 elementary and middle-school children aged 10 to 14 responded to measures of motivation, attributions, and achievement, both longitudinally and in a challenging reading context. Novel measures of motivation and attributions were constructed, validated, and utilized to examine the relationship between motivation, attributions, and achievement over a one-year period (Study I). The impact of classroom contexts and instructional practices was also explored through a study of the influence of topic interest and challenge on motivation, attributions, and persistence (Study II), as well as through interviews with children regarding motivation and reading in the classroom (Study III). The creation and validation of the novel measures supported the use of a self-report measure of motivation in situation-specific contexts and confirmed a three-factor structure of attributions for reading performance in both hypothetical and situation-specific contexts. A one-year follow-up study of children's motivation and reading achievement demonstrated declines in all components of motivation from ages 10 through 12, and particularly strong decreases in motivation with the transition to middle school. Past perceived competence for reading predicted current achievement after controlling for past achievement, and showed the strongest relationships with reading-related skills in both elementary and middle school. Motivation and attributions were strongly related, and children with higher motivation displayed more adaptive attributions for reading success and failure. In the context of a developmentally inappropriate, challenging reading task, children's motivation for reading, especially in terms of perceived competence, was threatened. However, interest in the story buffered some of the negative impacts of challenge, sustaining children's motivation, adaptive attributions, and reading persistence. Finally, children's responses during interviews outlined several emotions, perceptions, and aspects of reading tasks and contexts that influence reading motivation and achievement. Findings revealed that children with comparable motivation and achievement profiles respond in a similar way to particular reading situations, such as excessive challenge, but also that motivation is dynamic and individualistic and can change over time and across contexts. Overall, the present study outlines the importance of motivation and adaptive attributions for reading success, and the necessity of integrating various methodologies to study the dynamic construct of achievement motivation.
Abstract:
Research has shown a consistent correlation between efficacy and sport performance (Moritz et al., 2000). This relationship has been shown to be dynamic and reciprocal over seasons (e.g., Myers, Payment, et al., 2004), within games (e.g., Butt et al., 2003), and across trials (e.g., Feltz, 1982). The purpose of the present study was to examine self-efficacy and performance simultaneously within one continuous routine. Forty-seven undergraduate students performed a gymnastic sequence while using an efficacy measure. Results indicated that the efficacy-performance relationship was not reciprocal; previous performance was a significant predictor of subsequent performance (p < .01; βs ranged from .44 to .67). Results further revealed significant differences in efficacy beliefs between groups with high and low levels of performance [F(1, 571) = 7.16, p < .01]. Findings suggest that high levels of performance within a continuous physical activity task result in higher performance scores and higher efficacy beliefs.
Abstract:
Previously, studies investigating emotional face perception, regardless of whether they involved adults or children, presented participants with static photos of faces in isolation. In the natural world, faces are rarely encountered in isolation. In the few studies that have presented faces in context, the perception of emotional facial expressions is altered when the face is paired with an incongruent context. For both adults and 8-year-old children, reaction times increase and accuracy decreases when facial expressions are presented in an incongruent context depicting a similar emotion (e.g., a sad face on a fearful body) compared to a congruent context (e.g., a sad face on a sad body; Meeren, van Heijnsbergen, & de Gelder, 2005; Mondloch, 2012). This effect is called a congruency effect and does not exist for dissimilar emotions (e.g., happy and sad; Mondloch, 2012). Two models characterize similarity between emotional expressions differently: the emotional seed model bases similarity on physical features, whereas the dimensional model bases similarity on the underlying dimensions of valence and arousal. Study 1 investigated the emergence of an adult-like pattern of congruency effects in pre-school aged children. Using a child-friendly sorting task, we identified the youngest age at which children could accurately sort isolated facial expressions and body postures, and then measured whether an incongruent context disrupted the perception of emotional facial expressions. Six-year-old children showed congruency effects for sad/fear, but 4-year-old children did not for sad/happy. This pattern of congruency effects is consistent with both models and indicates that an adult-like pattern exists at the youngest age at which children can reliably sort emotional expressions in isolation. In Study 2, we compared the two models to determine their predictive abilities. The two models make different predictions about the size of congruency effects for three emotions: sadness, anger, and fear. The emotional seed model predicts larger congruency effects when sad is paired with either anger or fear than when anger and fear are paired with each other. The dimensional model predicts larger congruency effects when anger and fear are paired together than when either is paired with sad. In both a speeded and an unspeeded task, the results failed to support either model, but the pattern of results indicated that fearful bodies have a special effect. Fearful bodies reduced accuracy, increased reaction times more than any other posture, and shifted the pattern of errors. To determine whether the results were specific to bodies, we ran the reverse task to determine whether faces could disrupt the perception of body postures. This experiment did not produce congruency effects, meaning faces do not influence the perception of body postures. In the final experiment, participants performed a flanker task to determine whether the effect of fearful bodies was specific to faces or whether fearful bodies would also produce a larger effect in an unrelated task in which faces were absent. Reaction times did not differ across trials, meaning the large effect of fearful bodies is specific to situations with faces. Collectively, these studies provide novel insights, both developmental and theoretical, into how emotional faces are perceived in context.
Abstract:
Background: Routine screening for scoliosis is a controversial subject, and screening efforts vary greatly around the world. Methods: Consensus was sought among an international group of experts (seven spine surgeons and one clinical epidemiologist) using a modified Delphi approach. The consensus achieved was based on careful analysis of a recent critical review of the literature on scoliosis screening, performed using a conceptual framework of analysis focusing on five main dimensions: technical, clinical, program, cost, and treatment effectiveness. Findings: A consensus was obtained in all five dimensions of analysis, resulting in 10 statements and recommendations. In summary, there is scientific evidence to support the value of scoliosis screening with respect to technical efficacy and clinical, program, and treatment effectiveness, but there is insufficient evidence to make a statement with respect to cost effectiveness. Scoliosis screening should be aimed at identifying suspected cases of scoliosis that will be referred for diagnostic evaluation, where clinically significant scoliosis will be confirmed or ruled out. The scoliometer is currently the best tool available for scoliosis screening, and there is moderate evidence to recommend referral at values between 5 and 7 degrees. There is moderate evidence that scoliosis screening allows for the detection and referral of patients at an earlier stage of the clinical course, and there is low evidence suggesting that scoliosis patients detected by screening are less likely to need surgery than those who were not screened. There is strong evidence to support treatment by bracing. Interpretation: This information statement by an expert panel supports scoliosis screening in 4 of the 5 domains studied, using a framework of analysis that includes all of the World Health Organisation criteria for a valid screening procedure.
Abstract:
In this paper, a time series complexity analysis of dense array electroencephalogram signals is carried out using the recently introduced Sample Entropy (SampEn) measure. This statistic quantifies the regularity in signals recorded from systems that can vary from the purely deterministic to the purely stochastic realm. The present analysis is conducted with the objective of gaining insight into complexity variations related to changing brain dynamics for EEG recorded in three cases: a passive, eyes-closed condition; a mental arithmetic task; and the same mental task carried out after a physical exertion task. It is observed that the statistic is a robust quantifier of complexity, suited for short physiological signals such as the EEG, and that it points to the specific brain regions that exhibit lowered complexity during the mental task state as compared to a passive, relaxed state. In the case of mental tasks carried out before and after the performance of a physical exercise, the statistic can detect the variations brought in by the intermediate fatigue-inducing exercise period. This enhances its utility in detecting subtle changes in brain state, which can find wider scope for applications in EEG-based brain studies.
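For concreteness, the sketch below implements Sample Entropy in its usual form: the negative logarithm of the conditional probability that two sequences matching for m points also match for m + 1 points, within tolerance r and excluding self-matches. The parameter choices (m = 2, r = 0.2·SD) follow common practice rather than this paper, and the signal is a random stand-in for EEG.

```python
import numpy as np

# Minimal Sample Entropy (SampEn) sketch: lower values indicate a more
# regular (less complex) signal.

def sampen(x, m=2, r_factor=0.2):
    x = np.asarray(x, dtype=float)
    r = r_factor * x.std()

    def count_matches(mm):
        # All overlapping templates of length mm.
        templates = np.array([x[i:i + mm] for i in range(len(x) - mm)])
        count = 0
        for i in range(len(templates)):
            # Chebyshev distance to all later templates (no self-match).
            d = np.max(np.abs(templates[i + 1:] - templates[i]), axis=1)
            count += np.sum(d <= r)
        return count

    b, a = count_matches(m), count_matches(m + 1)
    return -np.log(a / b)

eeg = np.random.default_rng(3).normal(size=1000)   # stand-in for EEG
print(f"SampEn = {sampen(eeg):.3f}")
```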
Abstract:
In this thesis, the applications of recurrence quantification analysis (RQA) to metal cutting operations in a lathe, with the specific objective of detecting tool wear and chatter, are presented. This study is based on the discovery that the process dynamics in a lathe are low-dimensional chaotic, which implies that the machine dynamics are controllable using principles of chaos theory. This understanding stands to revolutionize the feature extraction methodologies used in condition monitoring systems, as conventional linear methods or models are incapable of capturing the critical and strange behaviors associated with the metal cutting process. As sensor-based approaches provide an automated and cost-effective way to monitor and control, an efficient feature extraction methodology based on nonlinear time series analysis is in much demand. The task is more complex when the information has to be deduced solely from sensor signals, since traditional methods do not address the issues of how to treat the noise present in real-world processes and their non-stationarity. In an effort to overcome these two issues as far as possible, this thesis adopts the recurrence quantification analysis methodology, since this feature extraction technique is found to be robust against noise and non-stationarity in the signals. The work consists of two sets of experiments in a lathe: set 1 and set 2. The set 1 experiments study the influence of tool wear on the RQA variables, whereas set 2 is carried out to identify the RQA variables sensitive to machine tool chatter, followed by validation in actual cutting. To obtain the bounds of the spectrum of the significant RQA variable values in set 1, a fresh tool and a worn tool are used for cutting. The first part of the set 2 experiments uses a stepped shaft in order to create chatter at a known location. The second part uses a conical section with a uniform taper along the axis, causing chatter to onset at some distance from the smaller end as the depth of cut is gradually increased while the spindle speed and feed rate are kept constant. The study concludes by unambiguously revealing the dependence of certain RQA variables (percent determinism, percent recurrence, and entropy) on tool wear and chatter. The results establish this methodology as viable for the detection of tool wear and chatter in metal cutting operations in a lathe. The key reason is that the dynamics of the system under study are nonlinear, and recurrence quantification analysis can characterize them adequately. This work establishes that the principles and practice of machining can benefit considerably from, and be advanced by, the use of nonlinear dynamics and chaos theory.
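As a minimal illustration of the RQA variables the thesis relies on, the sketch below builds a recurrence matrix from a scalar signal and computes percent recurrence and percent determinism. A real analysis would first reconstruct the phase space from the sensor signal (e.g., by delay embedding); the threshold and minimum line length here are illustrative choices.

```python
import numpy as np

# Minimal RQA sketch: recurrence matrix, percent recurrence, and
# percent determinism (share of recurrent points on diagonal lines).

def recurrence_matrix(x, eps):
    x = np.asarray(x, dtype=float)
    return (np.abs(x[:, None] - x[None, :]) <= eps).astype(int)

def percent_recurrence(R):
    n = R.shape[0]
    off = R.sum() - n               # exclude the trivial main diagonal
    return 100.0 * off / (n * n - n)

def percent_determinism(R, lmin=2):
    n = R.shape[0]
    in_lines = total = 0
    for k in range(1, n):           # each upper off-diagonal
        diag = np.diagonal(R, k)
        total += diag.sum()
        run = 0
        for v in np.append(diag, 0):    # trailing 0 flushes the last run
            if v:
                run += 1
            else:
                if run >= lmin:
                    in_lines += run
                run = 0
    return 100.0 * in_lines / total if total else 0.0

sig = np.sin(np.linspace(0, 8 * np.pi, 200))    # regular stand-in signal
R = recurrence_matrix(sig, eps=0.1)
print(percent_recurrence(R), percent_determinism(R))
```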