947 results for Level Independent Quasi-Birth-Death (LIQBD) Process


Relevance:

30.00%

Publisher:

Abstract:

Based on the latest seismic and geological data, the tectonic subsidence along three seismic lines in the deepwater area of the Pearl River Mouth Basin (PRMB), northern South China Sea (SCS), is calculated. The results show that the rifting process of the study area differs from that of a typical passive continental margin basin. Although seafloor spreading of the SCS initiated at 32 Ma, the tectonic subsidence rate did not decrease but instead increased, decreasing only at about 23 Ma, which indicates that rifting continued after the onset of seafloor spreading until about 23 Ma. The formation thickness exhibits the same pattern: the syn-rift stage was prolonged and the post-rift thermal subsidence delayed. Three formation mechanisms are proposed: (1) the lithospheric rigidity of the northern SCS is weak and its ductility relatively strong, which delayed the strain relaxation resulting from seafloor spreading; (2) differential, layered independent extension of the lithosphere may be one reason for the delay of the post-rift stage; and (3) the southward relocation of the SCS spreading ridge during 24-21 Ma and the corresponding acceleration of the seafloor spreading rate then triggered the initiation of large-scale thermal subsidence in the study area at about 23 Ma.
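Tectonic subsidence of the kind computed above is conventionally obtained by backstripping the sediment load. A minimal sketch of Airy-type backstripping for a single, fully decompacted sediment layer is given below; the densities and input values are illustrative textbook numbers, not figures taken from the study:

```python
def airy_tectonic_subsidence(s, wd, rho_s=2500.0, rho_m=3300.0,
                             rho_w=1000.0, d_sl=0.0):
    """Water-loaded tectonic subsidence by Airy backstripping (Steckler-Watts form).

    s    : decompacted sediment thickness (m)
    wd   : paleo-water depth (m)
    d_sl : eustatic sea-level change relative to present (m)
    Densities (kg/m^3) are illustrative mantle/sediment/water values.
    """
    return (s * (rho_m - rho_s) / (rho_m - rho_w)
            + wd
            - d_sl * rho_m / (rho_m - rho_w))

# Example: 3000 m of sediment deposited in 2000 m of water.
y = airy_tectonic_subsidence(3000.0, 2000.0)
```

Differencing such water-loaded subsidence values between stratigraphic horizons yields the subsidence-rate history whose delayed decrease the abstract describes.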

Relevance:

30.00%

Publisher:

Abstract:

To study the relationship between sediment transport and saltwater intrusion in the Changjiang (Yangtze) estuary, a three-dimensional numerical model for temperature, salinity, the velocity field, and suspended sediment concentration was established based on the ECOMSED model. Using this model, sediment transport in the Changjiang estuary during the flood season of 2005 was simulated. A comparison between simulated results and observational data for tidal level, flow velocity and direction, salinity, and suspended sediment concentration showed overall agreement. After model verification, the simulation of saltwater intrusion and its effect on sediment in the Changjiang estuary was analyzed in detail. The saltwater intrusion in the estuary, including the formation, evolution, and disappearance of the saltwater wedge and the vertical circulation it induces, was reproduced, and the crucial impact of the wedge on the distribution and transport of cohesive and non-cohesive suspended sediment was successfully simulated. The results show that near the salinity front, the simulated concentrations of both cohesive and non-cohesive suspended sediment in the surface layer had a strong relationship with the simulated velocity, especially when a 1-hour lag was considered. In the bottom layer, however, there was no obvious correlation between them, because the saltwater wedge and the vertical circulation it induces may have resuspended loose sediment on the bed, forming a high-concentration area near the bottom even when the near-bottom velocity was very low during the transition from flood to ebb.
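The reported 1-hour-lag relationship between surface velocity and suspended sediment concentration is the kind of result a simple lagged correlation makes concrete. The series below are synthetic stand-ins for hourly model output, not data from the study:

```python
import numpy as np

def lagged_corr(x, y, lag):
    """Pearson correlation between x[t-lag] and y[t] (positive lag: y trails x)."""
    if lag > 0:
        x, y = x[:-lag], y[lag:]
    elif lag < 0:
        x, y = x[-lag:], y[:lag]
    return np.corrcoef(x, y)[0, 1]

# Synthetic hourly series: SSC follows velocity magnitude with a 1-hour delay.
rng = np.random.default_rng(42)
vel = np.abs(np.sin(np.linspace(0, 8 * np.pi, 200))) + 0.1 * rng.standard_normal(200)
ssc = np.empty_like(vel)
ssc[1:] = 0.8 * vel[:-1]
ssc[0] = ssc[1]

lag0 = lagged_corr(vel, ssc, 0)
lag1 = lagged_corr(vel, ssc, 1)  # markedly stronger than the zero-lag correlation
```

With real model output one would scan a range of lags and take the maximizer; here the 1-hour lag is built into the synthetic data by construction.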

Relevance:

30.00%

Publisher:

Abstract:

To reduce thermal stresses during deposition and to suppress crack formation during forming, it is of great importance to study the effect of substrate preheating on thermal stresses in the laser metal deposition shaping (LMDS) process. Based on the "element birth and death" concept of finite element analysis, a numerical model of the multi-track, multi-layer LMDS process was built with APDL (ANSYS Parametric Design Language) programming, and the effects on process thermal stresses of an unheated substrate versus a substrate preheated to 400°C were investigated in depth. The computed results show that preheating the substrate to 400°C can significantly reduce the fluctuation of thermal stresses in the sample during forming: the maximum von Mises thermal stress can be reduced by about 10%, the maximum thermal stress in the x direction by about 8.5%, and the maximum thermal stress in the z direction by about 8.1%. Under the same conditions as the simulation, forming experiments were carried out on a self-developed LMDS system, and the experimental results agree well with the simulated results.

Relevance:

30.00%

Publisher:

Abstract:

The formation of civilization, one of the great milestones in the history of human social development, remains one of the most debated topics in the world, and many theories have been put forward to explain its causes and mechanisms. Although most attention has been paid to social development itself, the role of environmental change should not be ignored. In this paper, the level of ancient farming productivity is analyzed, the mechanisms and process of the formation of ancient Chinese civilization are explored, and the reasons why ancient Chinese civilization shows many features different from the other five ancient civilizations of the world are examined. The main results and conclusions are as follows. 1. Compared with the other five ancient civilizations, whose agriculture was based on irrigation, the productivity of ancient China, characterized by extensive rather than intensive cultivation, was lower. 2. The 5500 a B.P. cold event may have facilitated the formation of the Egyptian and Mesopotamian civilizations and also influenced the development of Neolithic culture in China. 3. The 4000 a B.P. cold event, which may have been the coldest period since the Younger Dryas and which marks the transition from the early Holocene Climatic Optimum to the late Holocene in many regions of the world, resulted in the great migration of the Indo-European peoples from northern Europe to other parts of the world, the collapse of the ancient civilizations of Egypt, the Indus and Mesopotamia, and the collapse of five Neolithic cultures around central China. More important still is the emergence of Chinese civilization during the same period. Many theories have been advanced to explain why it was the Zhongyuan area, rather than other places whose Neolithic cultures seem more advanced, that gave rise to civilization; so far no theory explains this satisfactorily.
Archaeological evidence clearly demonstrates that warfare prevailed throughout China, especially during the late Longshan culture period, so war seems to have played a very important role in the emergence of ancient Chinese civilization. Carneiro sees two conditions, acting in concert with warfare, as essential to the formation of complex societies: population growth and environmental circumscription. It was generally thought that China could not develop environmental circumscription and population pressure because it has extensive areas to live in, but that depends on circumstances. An environmentally circumscribed area formed as a result of the 4000 a B.P. cold event and the accompanying flood disasters, while population pressure arose from four factors: 1) the population grew rapidly because of the favorable environment of the Holocene Optimum, which laid the foundation for the ancient population; 2) population pressure was related to the primitive agricultural level, characterized by extensive rather than intensive cultivation; 3) it was related to the great migrations of peoples into the same areas; and 4) it was related to the decrease in productivity caused by the 4000 a B.P. cold event. 4. Once population pressure had formed, war was the most likely way to resolve the tensions between population and the limited cultivated land, and this in turn resulted in the formation of civilization. In this way, the climate change of the 4000 a B.P. cold event may have facilitated the emergence of ancient Chinese civilization. The detailed relations can be understood as follows: had the 4000 a B.P. cold event and the accompanying floods not occurred, the first birthplaces of ancient Chinese civilization could have been the Changjiang areas or (and) the Daihai area and Shandong province rather than central China, and the emergence of civilization in central China would have been delayed.

Relevance:

30.00%

Publisher:

Abstract:

Carbon is an essential element for life, food and energy. It is also a key element in greenhouse gases and therefore plays a vital role in climate change. The rapid increase in the atmospheric concentration of CO2 over the past 150 years, reaching current concentrations of about 370 ppmv, corresponds with the combustion of fossil fuels since the beginning of the industrial age. Conversion of forested land to agricultural use has also redistributed carbon from plants and soils to the atmosphere. These human activities have significantly altered the global carbon cycle. Understanding the consequences of these activities in the coming decades is critical for formulating economic, energy, technology, trade, and security policies that will affect civilization for generations. Under the auspices of the International Geosphere-Biosphere Programme (IGBP), several large international scientific efforts have focused on elucidating the various aspects of the global carbon cycle over the past decade. It is only possible to balance the global carbon cycle for the 1990s if there is a net carbon uptake by terrestrial ecosystems of around 2 Pg C/a, and there is now some independent, direct evidence for the existence of such a sink. Policymakers involved in the UN Framework Convention on Climate Change (UNFCCC) are striving to reach consensus on a 'safe path' for future emissions, so credible predictions of where and for how long the terrestrial sink will persist at its current level, grow, or decline are important to inform the policy process. Changes in terrestrial carbon storage depend not only on human activities but also on biogeochemical and climatological processes and their interactions with the carbon cycle. In this thesis, the climate-induced and human-induced changes of carbon storage in China over the past 20,000 years are examined.
Based on the data of the soil profiles investigated during China's Second National Soil Survey (1979-1989), the forest biomass measured during China's Fourth National Forest Resource Inventory (1989-1993), the grass biomass investigated during the First National Grassland Resource Survey (1980-1991), and data collected from the published literature, the current terrestrial carbon storage in China is estimated at ~144.1 Pg C, including ~136.8 Pg C in soil and ~7.3 Pg C in vegetation. The soil organic carbon (SOC) and soil inorganic carbon (SIC) storage are ~78.2 Pg C and ~58.6 Pg C, respectively. Within the vegetation reservoir, forest carbon storage is ~5.3 Pg C and the remaining ~1.4 Pg C is in grassland. Under natural conditions, the SOC, SIC, forest and grassland carbon storage would be ~85.3 Pg C, ~62.6 Pg C, ~24.5 Pg C and ~5.3 Pg C, respectively. Thus ~29.6 Pg C of organic carbon has been lost owing to land use, a decrease of ~20.6%. At the same time, SIC storage has decreased by ~4.0 Pg C (~6.4%). This suggests that human activity has caused significant carbon loss in the terrestrial carbon storage of China, especially in the forest ecosystem (~76% loss). Using the Paleocarbon Model (PCM) developed by Wu et al., the total terrestrial organic carbon storage in China at the Last Glacial Maximum (LGM) was ~114.8 Pg C, including ~23.1 Pg C in vegetation and ~86.7 Pg C in soil. At the Middle Holocene (MH), the vegetation, soil and total carbon storage were ~37.3 Pg C, ~93.9 Pg C and ~136.0 Pg C, respectively. This implies a gain of ~21.2 Pg C in terrestrial carbon storage from the LGM to the MH, mainly due to the temperature increase. However, a loss of ~14.4 Pg C of terrestrial organic carbon occurred in China under pre-industrial (before 1850) conditions compared with the MH, mainly due to the precipitation decrease associated with the weakening of the Asian summer monsoon.
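Two of the headline figures above can be recovered directly from the stated pools. The check below uses values copied from the text (in Pg C), under the assumption that the organic pools are SOC plus vegetation:

```python
# Organic carbon pools in Pg C, as stated in the text.
current_organic = 78.2 + 7.3          # SOC + vegetation today
natural_organic = 85.3 + 24.5 + 5.3   # SOC + forest + grassland, natural conditions

land_use_loss = natural_organic - current_organic   # organic C lost to land use
lgm_to_mh_gain = 136.0 - 114.8                      # total storage gain, LGM to MH
```

Both differences reproduce the quoted ~29.6 Pg C land-use loss and the ~21.2 Pg C LGM-to-MH gain.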
These results also suggest that the terrestrial ecosystem in China has substantial potential for the restoration of carbon storage, which might provide an efficient way to mitigate greenhouse warming through land-management practices. Assuming that half of the carbon lost from the degraded terrestrial ecosystems in the current forest and grass areas is restored during the next 50 years or so, the terrestrial ecosystem in China may sequester ~12.0 Pg of organic carbon from the atmosphere, a considerable offset to industrial CO2 emissions. If the 'Anthropocene' era turns out to be another climatic optimum like the MH owing to the greenhouse effect, the sequestration could be increased by a further ~4.3-9.0 Pg C in China.

Relevance:

30.00%

Publisher:

Abstract:

The receiver function method as applied to the study of upper-mantle discontinuities is systematically investigated in this paper. Using theoretical receiver functions, the characteristics of the P410s and P660s phases are analyzed and the factors affecting the detection of these phases are discussed. The stability of the receiver function is studied, and a new computational method, RFSSMS (Receiver Function from Stacking and Smoothing of Multiple seismic records at a Single station), is put forward. We built an initial reference velocity model for the medium beneath each of 18 seismic stations, then estimated the depths of the 410-km and 660-km discontinuities (denoted '410' and '660') under the stations from the arrival-time differences of P410s and P660s relative to P. We also developed a new receiver function inversion method, PGARFI (Peeling Genetic Algorithm of Receiver Function Inversion), to obtain the whole-crust and upper-mantle velocity structure and the depths of the discontinuities beneath a station. The major results can be summarized as follows: (1) From analysis of theoretical receiver functions with different velocity models and different ray parameters, we find that the amplitudes of the P410s and P660s phases decrease with increasing epicentral distance Δ, and that their arrival-time differences relative to P shorten as Δ increases. Multiply refracted and/or reflected waves generated at the Moho and at crustal discontinuities interfere with the identification of P410s. If a low-velocity zone (LVZ) exists beneath the lithosphere, some multiples caused by the LVZ will also interfere with the identification of P410s. Multiples produced by a discontinuity near 120 km depth mix with the P410s phase over some range of epicentral distance, and multiples associated with a discontinuity near 210 km depth interfere with the identification of P660s.
The epicentral distance for P410s identification is limited, with an upper bound of 80°; the identification of P660s is not obviously restricted by epicentral distance. In theoretical receiver functions, identification of P410s and P660s is only weakly affected by seismic-wave attenuation due to absorption in the medium, provided the Q value lies in a reasonable range. (2) The stability of the receiver function was studied using synthetic seismograms with different kinds of noise. The results show that, for seismic records with a high signal-to-noise ratio, high-frequency background noise and low-frequency microseism noise do not influence the calculated receiver function, but "scattering noise" in the medium does affect its stability. When the scattering effect reaches a certain level, P410s and P660s become difficult to identify in a single receiver function computed from only one seismic record. We therefore propose a new way to calculate the receiver function: for a group of earthquake records, stack the R and Z components separately in the frequency domain, apply a weighted smoothing to the stacked Z component, and then compute the complex spectral ratio of R to Z. This method improves the stability of the receiver function and makes the P410s and P660s phases stand out in the receiver function curves. (3) 263 receiver functions were computed from 1364 three-component broadband teleseismic records at 18 stations in China and adjacent areas, and the observed arrival-time differences of P410s and P660s relative to P were measured from them. The initial velocity model for each station was built according to prior research results. The depths of '410' and '660' under a station were obtained by adjusting the depths of these two discontinuities in the initial velocity model until the theoretical arrival-time differences of P410s and P660s relative to P matched the observed values.
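The stack-smooth-ratio recipe described above can be sketched in a few lines; the function name, the moving-average smoother and the water-level stabilization below are our assumptions for illustration, not the thesis' exact weighting scheme:

```python
import numpy as np

def rfssms(r_records, z_records, smooth_len=1, water_level=1e-6):
    """Receiver function from stacked R and Z spectra (RFSSMS-style sketch).

    r_records, z_records: lists of equal-length radial and vertical traces
    recorded at one station. The spectra are stacked across records, the
    stacked Z spectrum is smoothed, and the complex spectral ratio R/Z is
    inverse-transformed back to the time domain.
    """
    r_spec = np.sum([np.fft.rfft(r) for r in r_records], axis=0)
    z_spec = np.sum([np.fft.rfft(z) for z in z_records], axis=0)
    if smooth_len > 1:                      # weighted smoothing of stacked Z
        win = np.hanning(smooth_len)
        z_spec = np.convolve(z_spec, win / win.sum(), mode="same")
    # Water-level regularized spectral division R/Z.
    power = np.abs(z_spec) ** 2
    power = np.maximum(power, water_level * power.max())
    rf_spec = r_spec * np.conj(z_spec) / power
    return np.fft.irfft(rf_spec, n=len(r_records[0]))

# Sanity check: if R is Z delayed by 5 samples and scaled by 0.5,
# the receiver function is a 0.5-amplitude pulse at lag 5.
rng = np.random.default_rng(1)
z = rng.standard_normal(256)
r = 0.5 * np.roll(z, 5)
rf = rfssms([r], [z])
```

Stacking before division is what distinguishes this from averaging single-record receiver functions: a single noisy Z trace can have near-zero spectral holes, while the stacked spectrum is far better behaved.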
The results show obvious lateral heterogeneity in the depths of '410' and '660'. The '410' is shallower beneath BJI, XAN, LZH and ENH but deeper under QIZ and CHTO, with an average depth of 403 km. The average depth of the '660' is 663 km; it is deeper under MDJ and MAJO but shallower under QIZ and HYB. (4) To invert for the whole-crust and upper-mantle velocity structure, a new inversion method, PGARFI (Peeling Genetic Algorithm of Receiver Function Inversion), has been developed here. The medium beneath a station is divided into segments, and the velocity structure is inverted from the receiver function successively from the surface downward. Using PGARFI, the multiple reflection/refraction phases of shallower discontinuities are isolated from the first-order refracted converted phases of deeper discontinuities. A genetic algorithm with floating-point coding is used in the inversion of every segment, with arithmetic crossover and non-uniform mutation employed in the genetic optimization. Ten independent inversions are completed for each segment, and the 50 best velocity models are selected by fitness from all models produced in the inversion process; the final velocity structure of each segment is the weighted average of these 50 models. Before inversion, a broad range of velocity variation with depth and depth ranges for the main discontinuities are specified according to prior knowledge. PGARFI was verified with numerical tests and applied to invert the velocity structure beneath station HIA down to 700 km depth.
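The genetic-algorithm ingredients named above (floating-point coding, arithmetic crossover, non-uniform mutation) can be illustrated on a toy two-parameter misfit; the population size, generation count and quadratic objective below are illustrative choices, not the thesis' settings:

```python
import numpy as np

def ga_minimize(misfit, lo, hi, pop=60, gens=150, pm=0.2, b=3.0, seed=0):
    """Floating-point GA with arithmetic crossover and non-uniform mutation."""
    rng = np.random.default_rng(seed)
    lo, hi = np.asarray(lo, float), np.asarray(hi, float)
    x = rng.uniform(lo, hi, size=(pop, len(lo)))
    for g in range(gens):
        order = np.argsort([misfit(ind) for ind in x])
        x = x[order]                       # rank the population, best first
        parents = x[: pop // 2]            # elitist selection of the best half
        children = []
        while len(children) < pop - len(parents):
            i, j = rng.integers(0, len(parents), size=2)
            a = rng.uniform()              # arithmetic crossover: convex combination
            child = a * parents[i] + (1.0 - a) * parents[j]
            if rng.uniform() < pm:         # non-uniform mutation: range shrinks with g
                k = rng.integers(len(lo))
                span = (hi[k] - lo[k]) * (1.0 - g / gens) ** b
                child[k] = np.clip(child[k] + rng.uniform(-span, span), lo[k], hi[k])
            children.append(child)
        x = np.vstack([parents, np.array(children)])
    return min(x, key=misfit)

# Toy misfit with minimum at (3, -1), standing in for a receiver-function misfit.
best = ga_minimize(lambda v: (v[0] - 3) ** 2 + (v[1] + 1) ** 2,
                   [-10, -10], [10, 10])
```

The shrinking mutation range is what makes the mutation "non-uniform": early generations explore the full prior bounds, later generations only fine-tune, mirroring the coarse-to-fine search the segment-by-segment peeling requires.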

Relevance:

30.00%

Publisher:

Abstract:

This research addresses problems in public policy-making procedures. In conducting our research, we considered public policy as the allocation or reallocation of interests or resources among different members of the public. Because resources are limited, administrations should trade off the interests of different segments of society when formulating a policy. Unfortunately, in recent years there have been several mass conflicts with the administration of public policy, which suggests that some people's interests were ignored or harmed by certain policies. According to the theory of procedural justice, people may accept an unexpected result if they consider the procedure just. This research hypothesizes that there are problems in current policy-making procedures and that improving these procedures may make policies more acceptable. A pilot study was conducted by interviewing ten scholars from a range of disciplines, and the interview transcripts were coded by three analysts. The results indicate that: 1) most of the scholars criticized current public policies as lacking sensitivity to public issues; 2) most considered that current public policies do not resolve problems effectively; and 3) all considered that psychological research may enhance awareness of public issues and improve the effectiveness of policy. In Study 2, a policy-making procedure was tracked and compared with a social survey. The Beijing government intended to increase taxi fares to cope with the rising price of petroleum. Although the majority of delegates at a public hearing supported the proposal, a social survey of 186 residents and 63 taxi drivers indicated that both groups opposed it. The findings indicate that the hearing did not represent the opinions of the public, with the result that the policy failed to resolve the problem. Study 3 was a nonequivalent-control-group quasi-experiment.
Visitors to two websites were chosen as subjects for original-photo games. In the experimental group, visitors were invited to express their wishes and suggestions about the game rules for one week, and the rules, drawing on these suggestions, were then announced before the game started; in the control group, the rules were simply announced at the beginning of the game. Over the 23 days of the comparison, the experimental group submitted more photos than the control group. The results of this research imply that the good will of policy makers is not enough to make a policy effective. Surveys of public attitudes at the beginning of the policy-making process can allow policy makers to better identify public issues, assess the trade-offs among public interests, help ensure that policies are more acceptable, and help foster a harmonious society. The authors suggest that psychological research should take more social-level problems into account in the policy-making process.

Relevance:

30.00%

Publisher:

Abstract:

Since the 19th century, it was long believed that the function of the cerebellum was restricted to fine motor control and modulation. In the past two decades, however, more and more studies have challenged this traditional view. While the neuroanatomy of the cerebellum from the cellular to the system level has been well documented, the functions of this neural organ remain poorly understood. This study, comprising three experiments, attempted to further the understanding of cerebellar function from different viewpoints. Experiment One used a parametric design to control for motor effects. Activation in the cerebellum was found to be associated with the difficulty level of a semantic discrimination task, suggesting the involvement of the cerebellum in higher-level language functions; moreover, activation of the right posterior cerebellum co-varied with that of the frontal cortex. Experiment Two adopted a cue-go paradigm and an event-related design to exclude the effects of phonological and semantic factors in a mental writing task. The results showed that the bilateral anterior cerebellum and cerebral motor regions were significantly activated during the task, and that the hemodynamic response of the cerebellum was similar to that of the cerebral motor cortex; these results suggest that the cerebellum participates in motor imagery during orthographic output. Experiment Three investigated the learning process in a verb generation task. Both the lateral cerebellum and the vermis were activated in the task, and each was correlated with a separate set of frontal regions. More importantly, activations in both the cerebellum and the frontal cortex decreased with repetition of the task. These results indicate that the cerebellum and frontal cortex are jointly engaged in certain functions, each serving as part of a single functional system.
Taking these findings together, the following conclusions can be drawn: 1. The cerebellum is not only involved in functions related to speech and articulation, but also participates in the higher cognitive functions of language. 2. The cerebellum participates in various functions by supporting the corresponding regions of the cerebral cortex, rather than directly executing those functions as an independent module. 3. The anterior part of the cerebellum is related to motor functions, whereas the posterior part is involved in cognitive functions. 4. While motor functions rely on the engagement of both cerebellar hemispheres, the higher cognitive functions mainly depend on the right cerebellum.

Relevance:

30.00%

Publisher:

Abstract:

A number of functional neuroimaging studies of skilled readers have consistently shown activation to visual words in the left mid-fusiform cortex in the occipitotemporal sulcus (LMFC-OTS). Neuropsychological studies have also shown that lesions of the left ventral occipitotemporal areas result in impaired visual word processing. Based on these empirical observations and some theoretical speculation, researchers have postulated that the LMFC-OTS is responsible for the instant, parallel and holistic extraction of the abstract representation of letter strings, and have labeled this piece of cortex the "visual word form area" (VWFA). Nonetheless, functional neuroimaging alone is a correlative rather than causal approach, and the lesions in previous studies were typically not confined to the LMFC-OTS but involved other brain regions as well. Given these limitations, three fundamental questions remain unanswered: Is the LMFC-OTS necessary for visual word processing? Is it functionally selective for visual word processing and unnecessary for processing non-word stimuli? What are its functional properties in visual word processing? This thesis aimed to address these questions through a series of neuropsychological, anatomical and functional MRI experiments in four patients with different degrees of damage to the left fusiform gyrus. Necessity: Detailed analysis of anatomical brain images revealed that the four patients had different foci of brain infarction. Specifically, the LMFC-OTS was damaged in one patient, while it remained intact in the other three. Neuropsychological experiments showed that the patient with lesions in the LMFC-OTS had severe impairments in reading aloud and in recognizing Chinese characters, i.e., pure alexia.
The patient whose LMFC-OTS was intact but in whom information from the left visual field (LVF) was blocked, owing to lesions in the splenium of the corpus callosum, showed impaired recognition of Chinese characters presented in the LVF but not in the RVF, i.e., left hemialexia. In contrast, the other two patients with intact LMFC-OTS processed Chinese characters normally. The fMRI experiments demonstrated no significant activation to Chinese characters in the LMFC-OTS of the pure alexic patient, nor in that of the patient with left hemialexia when the stimuli were presented in the LVF. On the other hand, the latter patient, when Chinese characters were presented in the right visual field, and the other two patients with intact LMFC-OTS, showed activation in the LMFC-OTS. Together, these results point to the necessity of the LMFC-OTS for Chinese character processing. Selectivity: We tested the selectivity of the LMFC-OTS for visual word processing by systematically examining the patients' ability to process visual versus auditory words, and word versus non-word visual stimuli such as faces, objects and colors. Results showed that the pure alexic patient could normally process auditory words (expression, understanding and repetition of orally presented words) and non-word visual stimuli (faces, objects, colors and numbers). Although this patient showed some impairment in naming faces, objects and colors, his performance scores were only slightly lower than, or not significantly different from, those of the patients with intact LMFC-OTS. These data provide compelling evidence that the LMFC-OTS is not required for processing non-word visual stimuli and is thus selective for visual word processing.
Functional properties: With tasks involving multiple levels and aspects of word processing, including Chinese character reading, phonological judgment, semantic judgment, identity judgment of abstract visual word representations, lexical decision, perceptual judgment of visual word appearance, dictation, copying and voluntary writing, we attempted to identify the most critical dysfunction caused by damage to the LMFC-OTS, and thus to clarify the most essential function of this region. Results showed that, in addition to dysfunction in Chinese character reading and in phonological and semantic judgment, the patient with lesions of the LMFC-OTS failed to judge correctly whether two characters (both compound and simple characters) with different surface features (e.g., different fonts; printed vs. handwritten vs. calligraphic styles; simplified vs. traditional characters; different orientations of strokes or of whole characters) shared the same abstract representation. The patient initially showed severe impairment in processing both simple and compound characters. He could copy a compound character only stroke by stroke, not character by character or even radical by radical. During recovery, namely five months later, the patient could complete the abstract representation tasks for simple characters but showed no improvement for compound characters; by then, however, he could copy compound characters radical by radical. Furthermore, the recovery of copying appeared to parallel that of abstract representation judgment.
These observations indicate that lesions of the LMFC-OTS in the pure alexic patient caused severe damage to the ability to extract abstract representations from lower-level to higher-level units; the patient had particular difficulty extracting the abstract representation of a whole character from its secondary units (e.g., radicals or single characters), and this ability was resistant to recovery. Therefore, the LMFC-OTS appears to be responsible for the multilevel (particularly higher-level) abstract representation of visual word form. Successful extraction seems independent of access to phonological and semantic information, given that the alexic patient showed severe impairment in reading aloud and in semantic processing of simple characters while retaining intact judgment of their abstract representations. However, it is also possible that the interaction between the abstract representation and its related information, e.g. phonological and semantic information, was damaged as well in this patient. Taken together, we conclude that: 1) the LMFC-OTS is necessary for Chinese character processing; 2) it is selective for Chinese character processing; and 3) its critical function is to extract multiple levels of abstract representation of visual words, and possibly to transmit them to the phonological and semantic systems.

Relevance:

30.00%

Publisher:

Abstract:

Transfer of learning is one of the major concepts in educational psychology. As cognitive psychology has developed, many researchers have found that transfer plays an important part in problem solving and that awareness of the similarity of related problems is important in transfer, so interest in researching transfer has grown. However, the literature shows that researchers do not hold identical conclusions about the influence of awareness of related problems on problem-solving transfer. This dissertation is written on the basis of a substantial body of preparatory work, including a review of the literature on transfer in problem solving, a comparison of the results of recent research, and experimental studies. The author takes middle-school students as subjects, uses geometry as the material, and adopts a factorial design in the experiments. The influence of awareness of related problems on problem-solving transfer is examined along three dimensions: the difficulty of the transfer problems, the level of awareness of related problems, and the characteristics of the subjects themselves. Five conclusions were reached from the experimental research: (1) In geometry problem solving, the level of awareness of related problems is one of the major factors that influence problem-solving transfer. (2) Transfer problems that are either too difficult or too easy weaken the influence of awareness of related problems on transfer, and the difficulty of the transfer problems interacts with the level of awareness of related problems in affecting transfer. (3) In geometry problem-solving transfer, the level of awareness of related problems interacts with the degree of student achievement.
Compared with lower-achieving students, the influence of the level of awareness is greater for higher-achieving students. (4) There is a positive correlation between geometry achievement and the reasoning ability of middle-school students: students with higher reasoning ability have higher geometry achievement, and when the level of awareness is raised, the transfer achievement of both groups rises significantly. (5) There is a positive correlation between geometry achievement and the cognitive style of middle-school students: students with a field-independent cognitive style have higher geometry achievement, and when the level of awareness is raised, the transfer achievement of both groups rises significantly. At the end of the dissertation, the researcher offers two proposals for geometry teaching based on the research findings.

Relevância:

30.00% 30.00%

Publicador:

Resumo:

This project investigates the computational representation of differentiable manifolds, with the primary goal of solving partial differential equations using multiple coordinate systems on general n-dimensional spaces. In the process, this abstraction is used to perform accurate integrations of ordinary differential equations using multiple coordinate systems. In the case of linear partial differential equations, however, unexpected difficulties arise even with the simplest equations.
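The idea of integrating an ODE across multiple coordinate systems can be sketched in a toy setting: the circle S^1 covered by two overlapping angle charts, with the integrator hopping between charts before either coordinate reaches the edge of its domain. The chart layout, transition maps, and step size below are illustrative assumptions, not the project's actual representation:

```python
# Toy sketch: Euler integration of d(theta)/dt = omega on the circle S^1,
# represented by two overlapping angle charts with explicit transition maps.
# Chart A covers theta in (-pi, pi); chart B covers theta in (0, 2*pi).
import math

def a_to_b(theta):
    return theta % (2 * math.pi)                    # A-coordinate -> B-coordinate

def b_to_a(theta):
    return (theta + math.pi) % (2 * math.pi) - math.pi  # B-coordinate -> A-coordinate

def integrate(theta0, omega=1.0, dt=1e-3, steps=8000):
    """Integrate the constant vector field, switching charts away from edges."""
    chart, theta = "A", theta0
    for _ in range(steps):
        theta += omega * dt        # the vector field reads the same in both charts
        # Hop charts once the coordinate leaves the central half of its chart.
        if chart == "A" and abs(theta) > math.pi / 2:
            chart, theta = "B", a_to_b(theta)
        elif chart == "B" and abs(theta - math.pi) > math.pi / 2:
            chart, theta = "A", b_to_a(theta)
    return chart, theta

chart, theta = integrate(0.0)
# Total rotation is 8.0 radians; the exact answer in chart B is 8.0 mod 2*pi.
exact = 8.0 % (2 * math.pi)
```

Because the field is constant, Euler integration is exact here up to floating-point error, so the multi-chart bookkeeping (not the integrator) is what the sketch exercises.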

Relevância:

30.00% 30.00%

Publicador:

Resumo:

Security policies are increasingly being implemented by organisations. Policies are mapped to device configurations to enforce them, a task typically performed manually by network administrators. The development and management of these enforcement policies is a difficult and error-prone task. This thesis describes the development and evaluation of an off-line firewall policy parser and validation tool. It gives the system administrator a textual interface in the vendor-specific low-level languages they trust and are familiar with, backed by the support of an off-line compiler tool. The tool was created using the Microsoft C#.NET language and the Microsoft Visual Studio Integrated Development Environment (IDE). This provided an object environment for creating a flexible and extensible system, as well as simple Web and Windows prototyping facilities for building GUI front-end applications for testing and evaluation. A CLI was provided for more experienced users, but the tool was also designed to be easily integrated into GUI-based applications for non-expert users. The evaluation of the system was performed from a custom-built GUI application, which can create test firewall rule sets containing synthetic rules to supply a variety of experimental conditions, as well as record various performance metrics. The validation tool was created with a pragmatic outlook regarding the needs of the network administrator. The modularity of the design was important because of the fast-changing nature of the network device languages being processed. An object-oriented approach was taken for maximum changeability and extensibility, and a flexible tool was developed to meet the needs of different types of users: system administrators want low-level, CLI-based tools that they can trust and use easily from scripting languages, while inexperienced users may prefer a more abstract, high-level GUI or wizard with an easier learning process.
Built around these ideas, the tool was implemented and proved to be a usable and complementary addition to the many network policy-based systems currently available. The tool has a flexible design and comprehensive functionality, unlike some other tools that work across multiple vendor languages but do not implement a deep range of options for any of them. It complements existing systems, such as policy compliance tools and abstract policy analysis systems. Its validation algorithms were evaluated for both completeness and performance; the tool was found to correctly process large firewall policies in just a few seconds. A framework for a policy-based management system, with which the tool would integrate, is also proposed. This is based around a vendor-independent XML-based repository of device configurations, which could be used to bring together existing policy management and analysis systems.
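To make the idea of rule-set validation concrete, here is a hypothetical sketch of one check such an off-line tool might run: detecting "shadowed" rules, i.e. rules that can never match because an earlier rule with the opposite action covers their entire match space. The `Rule` fields, names, and the coarse prefix comparison are illustrative assumptions, not the thesis tool's actual format or algorithm:

```python
# Hypothetical shadowing check over an ordered firewall rule list.
# A later rule is shadowed if an earlier rule with the opposite action
# matches every packet the later rule matches.
from dataclasses import dataclass

@dataclass(frozen=True)
class Rule:
    action: str          # "permit" or "deny"
    src: str             # source prefix, e.g. "10.0.0.0/8", or "any"
    dst: str             # destination prefix, or "any"
    port: range          # destination port range

def covers(a: Rule, b: Rule) -> bool:
    """True if rule a matches every packet rule b matches (coarse check:
    prefixes compare only by equality or the "any" wildcard)."""
    def prefix_covers(p, q):
        return p == "any" or p == q
    return (prefix_covers(a.src, b.src)
            and prefix_covers(a.dst, b.dst)
            and b.port.start >= a.port.start
            and b.port.stop <= a.port.stop)

def shadowed_rules(rules):
    """Return indices of rules fully covered by an earlier, opposite rule."""
    result = []
    for i, later in enumerate(rules):
        for earlier in rules[:i]:
            if earlier.action != later.action and covers(earlier, later):
                result.append(i)
                break
    return result

policy = [
    Rule("deny",   "any",         "10.1.0.0/16", range(0, 65536)),
    Rule("permit", "10.2.0.0/16", "10.1.0.0/16", range(80, 81)),   # shadowed
    Rule("permit", "any",         "10.3.0.0/16", range(443, 444)),
]
print(shadowed_rules(policy))  # -> [1]: rule 1 can never match
```

A production validator would additionally expand vendor syntax, compare prefixes by subnet containment rather than equality, and report partial overlaps, but the ordered pairwise comparison above is the core shape of the check.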

Relevância:

30.00% 30.00%

Publicador:

Resumo:

The influence of process variables (pea starch, guar gum and glycerol) on the viscosity (V), solubility (SOL), moisture content (MC), transparency (TR), Hunter parameters (L, a, and b), total color difference (ΔE), yellowness index (YI), and whiteness index (WI) of pea starch based edible films was studied using a three-factor, three-level Box–Behnken response surface design. The individual linear effects of pea starch, guar gum and glycerol were significant (p < 0.05) for all the responses. However, the a value was significantly (p < 0.05) affected only by pea starch (positive linear term) and guar gum (negative linear term). The starch × glycerol interaction also had a significant (p < 0.05) effect on the TR of the edible films, and the starch × guar gum interaction had a significant impact on the b and YI values. The quadratic regression coefficient of pea starch showed a significant effect (p < 0.05) on V, MC, L, b, ΔE, YI, and WI; that of glycerol on ΔE and WI; and that of guar gum on ΔE and SOL. The results were analyzed by Pareto analysis of variance (ANOVA), and second-order polynomial models developed from the experimental design fitted the corresponding experimental data reliably and satisfactorily, with high coefficients of determination (R2 > 0.93). Three-dimensional response surface plots were established to investigate the relationship between the process variables and the responses. The optimized conditions, with the goal of maximizing TR and minimizing SOL, YI and MC, were 2.5 g pea starch, 25% glycerol and 0.3 g guar gum. The results revealed that pea starch/guar gum edible films with appropriate physical and optical characteristics can be produced effectively and applied successfully in the food packaging industry.
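The second-order polynomial model underlying this kind of response-surface study can be sketched with ordinary least squares. The example below uses synthetic data on coded factor levels (−1, 0, +1) and a full 3^3 grid for simplicity (an actual Box–Behnken design uses a subset of these points plus center replicates); the coefficient values are illustrative, not the paper's fitted model:

```python
# Sketch: fitting a second-order (quadratic + interaction) polynomial model
# for three coded factors (starch, glycerol, guar gum) by least squares.
# Synthetic response data -- not the paper's measurements.
import itertools
import numpy as np

rng = np.random.default_rng(0)

# Coded levels for three factors: full 3^3 grid of (-1, 0, +1).
X = np.array(list(itertools.product([-1, 0, 1], repeat=3)), dtype=float)

def design_matrix(X):
    """Columns: 1, x1, x2, x3, x1^2, x2^2, x3^2, x1*x2, x1*x3, x2*x3."""
    x1, x2, x3 = X.T
    return np.column_stack([
        np.ones(len(X)), x1, x2, x3,
        x1**2, x2**2, x3**2,
        x1 * x2, x1 * x3, x2 * x3,
    ])

# Illustrative "true" coefficients used to generate a synthetic response.
true_beta = np.array([5.0, 1.2, -0.8, 0.5, 0.3, 0.0, -0.2, 0.4, 0.0, 0.1])
y = design_matrix(X) @ true_beta + rng.normal(0, 0.01, len(X))

beta_hat, *_ = np.linalg.lstsq(design_matrix(X), y, rcond=None)
residual = y - design_matrix(X) @ beta_hat
r2 = 1 - np.sum(residual**2) / np.sum((y - y.mean()) ** 2)
```

The fitted `beta_hat` recovers the generating coefficients, and `r2` plays the role of the R2 > 0.93 criterion reported above; response-surface plots are then just this polynomial evaluated over a grid of two factors with the third held fixed.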

Relevância:

30.00% 30.00%

Publicador:

Resumo:

To Augustyn Surdyk, numerous assumptions of constructivism and constructionism in the educational context seem to correspond with the idea of autonomisation in foreign language didactics. He presents a comparison of selected aspects of the three theories in question, using the example of an innovative communicative technique, Role-Playing Games, applied in the process of teaching foreign languages at an advanced level. The conventions of the technique, with its simplified rules, have been borrowed from popular parlour games and adapted by the author to the conditions of language didactics. The elements of play and simulation incorporated in the technique allow it to be rated among the techniques of ludic strategy. (from the Preface to the book)

Relevância:

30.00% 30.00%

Publicador:

Resumo:

Extensible systems allow services to be configured and deployed for the specific needs of individual applications. This paper describes a safe and efficient method for user-level extensibility that requires only minimal changes to the kernel. A sandboxing technique is described that supports multiple logical protection domains within the same address space at user level. This approach allows applications to register sandboxed code with the system, which may be executed in the context of any process. Our approach differs from other implementations that require special hardware support, such as segmentation or tagged translation look-aside buffers (TLBs), to implement multiple protection domains in a single address space or to support fast switching between address spaces. Likewise, we do not require the entire system to be written in a type-safe language to provide fine-grained protection domains. Instead, our user-level sandboxing technique requires only page-based virtual memory support, together with the requirement that extension code be written either in a type-safe language or by a trusted source. Using a fast method of upcalls, we show how our sandboxing technique for implementing logical protection domains provides significant performance improvements over traditional methods of invoking user-level services. Experimental results show our approach to be an efficient method for extensibility, with inter-protection-domain communication costs close to those of hardware-based solutions leveraging segmentation.
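The register-then-upcall pattern described above can be sketched as a toy analogy in Python: applications register extension code together with the set of services it may use, and the system later "upcalls" into that code inside a restricted logical domain. All names here are illustrative, and the restriction is language-level only; the paper's actual mechanism relies on page-based virtual memory protection, not on anything resembling this sketch:

```python
# Toy analogy of registering sandboxed extension code and invoking it via
# an upcall inside a logical protection domain. Illustrative only.

class Sandbox:
    """A logical protection domain exposing only whitelisted services."""
    def __init__(self, services):
        self._services = dict(services)   # the only names extensions may call

    def call(self, name, *args):
        if name not in self._services:
            raise PermissionError(f"service {name!r} not exported to this domain")
        return self._services[name](*args)

class Kernel:
    def __init__(self):
        self._extensions = {}

    def register(self, event, handler, services):
        """Register sandboxed extension code for an event, with its domain."""
        self._extensions[event] = (handler, Sandbox(services))

    def upcall(self, event, payload):
        """Invoke the registered extension in its sandbox, in the context of
        whatever 'process' triggered the event."""
        handler, sandbox = self._extensions[event]
        return handler(sandbox, payload)

kernel = Kernel()
kernel.register(
    "packet",
    lambda sb, pkt: sb.call("log", f"saw {len(pkt)} bytes"),
    services={"log": lambda msg: f"LOG: {msg}"},
)
print(kernel.upcall("packet", b"\x00" * 64))   # LOG: saw 64 bytes
```

The point of the analogy is the control flow: extensions never reach system state directly, only through the explicit interface their domain exports, which is what the paper's page-protection mechanism enforces in hardware rather than in the language.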