993 results for Nature study.


Relevance:

30.00%

Publisher:

Abstract:

Econometrics is a young science. It emerged in the mid-1930s and developed primarily after the Second World War. Econometrics is the unification of statistical analysis, economic theory and mathematics. The history of econometrics can be traced to the use of statistical and mathematical analysis in economics. The most prominent contributions during the initial period can be seen in the works of Tinbergen and Frisch, and in that of Haavelmo from the 1940s through the mid-1950s. From the rudimentary application of statistics to economic data, such as the use of the laws of error through the development of least squares by Legendre, Laplace and Gauss, the discipline of econometrics later witnessed the applied work of Edgeworth and Mitchell. A significant milestone in its evolution was the development of multiple regression and correlation analysis by Tinbergen, Frisch and Haavelmo, who used these techniques to test different economic theories using time series data. Even though some predictions based on econometric methodology have gone wrong, the sound scientific nature of the discipline cannot be ignored. This is reflected in the economic rationale underlying any econometric model and in the statistical and mathematical reasoning behind the inferences drawn. The relevance of econometrics as an academic discipline assumes high significance in this context. Because of its interdisciplinary nature (a unification of Economics, Statistics and Mathematics), the subject can be taught in all of these broad areas, notwithstanding the fact that it is most often offered only to Economics students, since students of other disciplines may lack the Economics background needed to follow it. In fact, econometrics is also quite relevant to technical courses (such as Engineering), business management courses (such as the MBA) and professional accountancy courses, and even more so to research students in the various social sciences, commerce and management. In the ongoing scenario of globalization and economic deregulation, there is a need to give added thrust to the academic discipline of econometrics in higher education, across the various social science streams, commerce, management, professional accountancy and so on. This would sharpen students' analytical ability, improve their capacity to approach socio-economic problems mathematically, and enable them to derive scientific inferences and solutions to such problems. The great significance of hands-on practical training in the use of computer-based econometric packages, especially at the postgraduate and research levels, must also be pointed out. Merely learning econometric methodology or the underlying theories would have little practical utility for students in their future careers, whether in academia, industry or practice. This paper seeks to trace the historical development of econometrics and to study its current status as an academic discipline in higher education. It also looks into the problems faced by teachers in teaching econometrics and by students in learning the subject, including effective application of the methodology in real-life situations, and accordingly offers some meaningful suggestions for the effective teaching of econometrics in higher education.

Relevance:

30.00%

Publisher:

Abstract:

Soft clays, known for their high compressibility, low stiffness and low shear strength, are always associated with large settlements. In-place soil treatment using calcium-based stabilizers like lime and cement is a feasible solution to address strength deficiencies and the problematic shrink/swell behaviour of unstable subgrade soils. Of these, lime has unambiguously proved to be the most effective and economical stabilising agent for marine clays. Lime stabilisation induces long-term chemical changes in unstable clay soils, creating strong, but flexible, permanent structural layers in foundations and other pavement systems. Even though calcium-based stabilizers can improve the engineering properties of soft clays, problems can arise when they are used in soils rich in sulphates. Marine clays may be enriched with sulphates, either naturally or due to the discharge of nearby industrial wastes containing sulphates. The presence of sulphates is reported to adversely affect the cation exchange and pozzolanic reactions of cement- and lime-treated soil systems. Sulphate anions may combine with the available calcium and alumina to form insoluble ettringite in the soil system. The literature on sulphate attack in lime-treated marine clays reports that formation of ettringite in the lime-sodium sulphate-clay system is capable of adversely affecting the engineering behaviour of marine clays. Only very few studies have been conducted on the soft marine clays found along the coastal belt of Kerala, and these are limited to Cochin marine clays. These studies also have the limitation that the strength behaviour of lime-stabilised clay was investigated for only one year. Practically no data are available on the long-term adverse effects likely to be brought about by sulphates on the strength and compressibility characteristics of Cochin marine clays. The overriding goal of this investigation was thus to examine the effectiveness of lime stabilisation in Cochin marine clays under varying sulphate contents. The study aims to reveal the changes brought about by varying sulphate contents in both the physical and the engineering properties of these clays stabilised by lime, and the results for curing periods of up to two years are presented in this thesis. Quite often the load causing an unacceptable settlement may be less than the load required to cause shear failure; therefore an attempt has been made in this research to highlight sulphate-induced changes in both the compressibility and the strength characteristics of lime-treated Cochin marine clays. The study also aimed at comparing the available IS methods for sulphate quantification, and it has attempted to determine the threshold level of sulphate likely to make lime stabilisation of these clays vulnerable. The clays used in this study were obtained from two different sites in Kochi and contained sulphate in two different natural concentrations, viz. 0.5% and 0.1%. Two different lime percentages were tried out, 3% and 6%. The sulphate content was varied from 1% to 4% by addition of reagent-grade sodium sulphate. The long-term influence of naturally present sulphate was also investigated. X-ray diffraction and SEM studies were undertaken to understand how the soil-lime reactions are affected in the presence of sodium sulphate.
A natural sulphate content of 0.1% did not seem to influence the normal soil-lime reactions, but 0.5% sulphate could induce significant adverse changes in both the compressibility and the strength behaviour of lime-treated clays over long durations. Compressibility is seen to increase drastically with increasing sulphate content, suggesting the formation of ettringite on curing for longer periods. The increase in compression index and the decrease in bond strength with curing period underline the adverse effects induced in lime-treated marine clays by the presence of sulphates. The presence of sulphate in concentrations ranging from 0.5% to 4% is capable of adversely affecting the strength of lime-treated marine clays, and a considerable decrease in strength is observed with increasing concentrations of sulphate. Ettringite formation due to the domination of sodium ions in the system was confirmed by the mineralogical studies. Barium chloride and barium hydroxide are capable of bringing about beneficial changes in both the compressibility and the strength characteristics of lime-treated Cochin marine clays in the presence of varying concentrations of sulphate, and their effect is strongly influenced by curing time. Clay containing sodium sulphate showed increased strength when either of the barium compounds was used with lime, as compared with specimens treated with lime alone. Barium hydroxide is observed to increase the strength markedly as compared to barium chloride when used in conjunction with lime to counteract the effect of sulphate.

Relevance:

30.00%

Publisher:

Abstract:

The focus of self-assembly as a synthetic strategy has been confined largely to molecules, because of the importance of manipulating the structure of matter at the molecular scale. We have investigated the influence of temperature and pH, in addition to the concentration of the capping agent used, on the formation of the nano-bio conjugates. For example, a narrower size distribution of the nanoparticles was observed with increasing protein concentration, which supports the view that γ-globulin acts both as a controller of nucleation and as a stabiliser. As analysed through various photophysical, biophysical and microscopic techniques such as TEM, AFM, C-AFM, SEM, DLS, OPM, CD and FTIR, we observed that the initial photoactivation of γ-globulin at pH 12 for 3 h resulted in small protein fibres. Further irradiation for 24 h led to the formation of self-assembled long fibres of the protein of ca. 5-6 nm and to the appearance of a surface plasmon resonance band at around 520 nm, with concomitant quenching of the luminescence intensity at 680 nm. The observation of light-triggered self-assembly of the protein and its effect in controlling the fate of the anchored nanoparticles can be compared with naturally occurring processes such as photomorphogenesis. Furthermore, our approach offers a way to understand the role played by the self-assembly of the protein in the ordering and knock-out of the metal nanoparticles, and also in the design of nano-biohybrid materials for medicinal and optoelectronic applications. Investigation of the potential applications of NIR-absorbing and water-soluble squaraine dyes 1-3 for protein labeling and as anti-amyloid agents forms the subject matter of the third chapter of the thesis. The study of their interactions with various proteins revealed that 1-3 showed unique interactions towards serum albumins as well as lysozyme, with changes of 69%, 71% and 49% in the absorption spectra and significant quenching of the fluorescence intensity of the dyes 1-3, respectively. Half-reciprocal analysis of the absorption data and isothermal titration calorimetric (ITC) analysis of the titration experiments gave a 1:1 stoichiometry for the complexes formed between lysozyme and the squaraine dyes, with association constants (Kass) in the range 10^4-10^5 M^-1. We have determined the changes in free energy (ΔG) for the complex formation, and the values are found to be -30.78, -32.31 and -28.58 kJ mol^-1 for the dyes 1, 2 and 3, respectively. Furthermore, we have observed a strong induced CD (ICD) signal corresponding to the squaraine chromophore in the case of the halogenated squaraine dyes 2 and 3 at 636 and 637 nm, confirming the complex formation in these cases. To understand the nature of the interaction of the squaraine dyes 1-3 with lysozyme, we have investigated the interaction of the dyes with different amino acids. These results indicated that the dyes 1-3 show significant interactions with cysteine and glutamic acid, which are present in the side chains of lysozyme. In addition, temperature-dependent studies revealed that the interaction between the dyes and lysozyme is irreversible. Furthermore, we have investigated the interactions of these NIR dyes 1-3 with β-amyloid fibres derived from lysozyme, to evaluate their potential as inhibitors of this biologically important protein aggregation.
These β-amyloid fibrils are insoluble protein aggregates that have been associated with a range of neurodegenerative diseases, including Huntington's, Alzheimer's, Parkinson's and Creutzfeldt-Jakob diseases. We synthesized amyloid fibres from lysozyme by incubating it in acidic solution below pH 4 and allowing amyloid fibres to form at elevated temperature. To quantify the binding affinities of the squaraine dyes 1-3 with the β-amyloids, we carried out isothermal titration calorimetric (ITC) measurements. The association constants were determined and found to be 1.2 × 10^5, 3.6 × 10^5 and 3.2 × 10^5 M^-1 for the dyes 1-3, respectively. To gain more insight into the amyloid-inhibiting nature of the squaraine dyes under investigation, we carried out thioflavin assays, CD, isothermal titration calorimetry and microscopic analysis. The addition of the dyes 1-3 (5 μM) led to complete quenching of the apparent thioflavin fluorescence, indicating the destabilization of the β-amyloid fibres in the presence of the squaraine dyes. Further, the inhibition of the amyloid fibres by the squaraine dyes 1-3 was evidenced through DLS, TEM, AFM and SAED, wherein we observed complete destabilization of the amyloid fibres and their transformation into spherical particles. These results demonstrate that the squaraine dyes 1-3 can act as protein labeling agents as well as inhibitors of protein amyloidogenesis. The last chapter of the thesis describes the synthesis and investigation of the self-assembly as well as the bio-imaging aspects of a few novel tetraphenylethene conjugates 4-6. As expected, these conjugates showed significant solvatochromism, exhibiting a hypsochromic shift (negative solvatochromism) as the solvent polarity increased, and these observations were supported by theoretical studies employing the B3LYP/6-31g method. We have investigated the self-assembly properties of these D-A conjugates through variation of the percentage of water in acetonitrile solution, leading to the formation of nanoaggregates. Further, the contour map of the observed fluorescence intensity as a function of the excitation and emission wavelengths confirmed the formation of J-type aggregates in these cases. To better understand the type of self-assemblies formed from the TPE conjugates 4-6, we carried out morphological analysis through various microscopic techniques such as DLS, SEM and TEM. At ca. 70% water, we observed rod-shaped architectures ~780 nm in diameter and ~12 μm in length, as evidenced through TEM and SEM analysis. We made similar observations with the dodecyl conjugate 5. At ca. 70% and 50% water/acetonitrile mixtures, the aggregates formed from 4 and 5 were found to be highly crystalline, and such structures transformed to an amorphous nature as the water fraction was increased to 99%. To evaluate the potential of the conjugates as bio-imaging agents, we carried out in vitro cytotoxicity and cellular uptake studies through the MTT assay, flow cytometry and confocal laser scanning microscopy. Thus, nanoparticles of these conjugates, which exhibit efficient emission, a large Stokes shift, good stability, biocompatibility and excellent cellular imaging properties, can have potential applications for tracking cells as well as in cell-based therapies.
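For reference, the quoted ΔG values follow from the association constants through the standard relation ΔG = -RT ln(Kass). A minimal sketch of the arithmetic (illustrative Python, assuming T ≈ 298 K; not code from the thesis):

```python
import math

R = 8.314    # gas constant, J mol^-1 K^-1
T = 298.15   # assumed temperature, K (room temperature)

def binding_free_energy(k_ass):
    """Free-energy change (kJ/mol) of 1:1 complex formation for association constant k_ass (M^-1)."""
    return -R * T * math.log(k_ass) / 1000.0

# An association constant of ~1e5 M^-1 corresponds to roughly -28.5 kJ/mol,
# consistent with the range of dG values quoted above for the dye complexes.
print(binding_free_energy(1e5))
```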
In summary, we have synthesized novel functional organic chromophores and have carried out a systematic investigation of the self-assembly of these synthetic and biological building blocks under a variety of conditions. The investigation of the interaction of the water-soluble NIR squaraine dyes with lysozyme indicates that these dyes can act as protein labeling agents, and their efficient inhibition of β-amyloid fibres indicates their potential as anti-amyloid agents.

Relevance:

30.00%

Publisher:

Abstract:

The country has witnessed a tremendous increase in vehicle population and in axle loading during the last decade, leaving its road network overstressed and leading to premature failure. The type of deterioration present in the pavement should be considered when determining whether it has a functional or structural deficiency, so that an appropriate overlay type and design can be developed. Structural failure arises from conditions that adversely affect the load-carrying capability of the pavement structure. Inadequate thickness, cracking, distortion and disintegration cause structural deficiency. Functional deficiency arises when the pavement does not provide a smooth riding surface and comfort to the user. This can be due to poor surface friction and texture, hydroplaning and splash from the wheel path, rutting and excess surface distortion such as potholes, corrugation, faulting, blow-ups, settlement, heaves, etc. Functional condition determines the level of service provided by the facility to its users at a particular time and also the Vehicle Operating Costs (VOC), thus influencing the national economy. Prediction of pavement deterioration helps to assess the remaining effective service life (RSL) of the pavement structure on the basis of reduction in performance levels, and to evaluate alternative designs and rehabilitation strategies together with the long-range funding requirement for pavement preservation. In addition, such models can predict the impact of a treatment on the condition of the sections. Infrastructure prediction models can thus be classified into four groups, namely primary response models, structural performance models, functional performance models and damage models. The factors affecting the deterioration of roads are very complex in nature and vary from place to place. Hence there is a need for a thorough study of the deterioration mechanism under varied climatic zones and soil conditions before arriving at a definite strategy of road improvement. Realizing the need for a detailed study involving all types of roads in the state with varying traffic and soil conditions, the present study has been attempted. This study attempts to identify the parameters that affect the performance of roads and to develop performance models suited to Kerala conditions. A critical review of the various factors that contribute to pavement performance is presented, based on data collected from selected road stretches and from five corporations of Kerala. These roads represent urban conditions as well as National Highways, State Highways and Major District Roads in suburban and rural conditions. This research work is a pursuit towards a study of the road condition of Kerala with respect to varying soil, traffic and climatic conditions, periodic performance evaluation of selected roads of representative types, and the development of distress prediction models for the roads of Kerala. In order to achieve this aim, the study is divided into two parts. The first part deals with the study of the pavement condition and subgrade soil properties of urban roads distributed over five Corporations of Kerala, namely Thiruvananthapuram, Kollam, Kochi, Thrissur and Kozhikode. From the 44 roads selected, 68 homogeneous sections were studied. The data collected on the functional and structural condition of the surface include pavement distress in terms of cracks, potholes, rutting, raveling and pothole patching.
The structural strength of the pavement was measured as rebound deflection using Benkelman Beam deflection studies. In order to collect details of the pavement layers and determine the subgrade soil properties, trial pits were dug and the in-situ field density was found using the Sand Replacement Method. Laboratory investigations were carried out to determine the subgrade soil properties: soil classification, Atterberg limits, Optimum Moisture Content, Field Moisture Content and 4-day soaked CBR. The relative compaction in the field was also determined. Traffic details were collected by conducting traffic volume count and axle load surveys. From the data thus collected, the strength of the pavement, which is a function of the layer coefficients and thicknesses, was calculated and represented as the Structural Number (SN). This was further related to the CBR value of the soil to obtain the Modified Structural Number (MSN). The condition of the pavement was represented in terms of the Pavement Condition Index (PCI), which is a function of the surface distress at the time of investigation and was calculated in the present study using the deduct value method developed by the US Army Corps of Engineers. The influence of subgrade soil type and pavement condition on the relationship between MSN and rebound deflection was studied using appropriate plots for the predominant soil types and for classified values of the Pavement Condition Index. This relationship will help practicing engineers design the overlay thickness required for a pavement without conducting the BBD test. Regression analysis using SPSS was carried out with various trials to find the best-fit relationship between rebound deflection and CBR and other soil properties for the gravel, sand, silt and clay fractions. The second part of the study deals with periodic performance evaluation of selected road stretches representing National Highway (NH), State Highway (SH) and Major District Road (MDR) categories, located in different geographical conditions and carrying varying traffic. Eight road sections, divided into 15 homogeneous sections, were selected for the study, and six sets of continuous periodic data were collected. The periodic data collected include the functional and structural condition in terms of distress (potholes, pothole patches, cracks, rutting and raveling), skid resistance using a portable skid resistance pendulum, surface unevenness using a Bump Integrator, texture depth using the sand patch method and rebound deflection using the Benkelman Beam. Baseline data for the study stretches were collected as one-time data. Pavement history was obtained as secondary data. Pavement drainage characteristics were collected in terms of camber or cross slope, using a camber board (slope meter), for the carriageway and shoulders, availability of a longitudinal side drain, presence of a valley, terrain condition, soil moisture content, water table data, High Flood Level, rainfall data, land use and cross slope of the adjoining land. These data were used to establish the drainage condition of the study stretches. Traffic studies were conducted, including classified volume counts and axle load studies. From the field data thus collected, the progression of each parameter was plotted for all the study roads and validated for accuracy. The Structural Number (SN) and Modified Structural Number (MSN) were calculated for the study stretches.
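A minimal sketch of the SN and MSN calculations described above (illustrative Python; the layer coefficients, thicknesses and the subgrade-contribution formula are the commonly used AASHTO/TRL forms, assumed here rather than taken from this study):

```python
import math

def structural_number(layers):
    """layers: list of (layer_coefficient, thickness_in_inches) tuples; SN = sum(a_i * d_i)."""
    return sum(a_i * d_i for a_i, d_i in layers)

def modified_structural_number(sn, cbr_percent):
    """Add the subgrade contribution to SN using the commonly cited TRL relationship (assumed)."""
    log_cbr = math.log10(cbr_percent)
    return sn + 3.51 * log_cbr - 0.85 * log_cbr ** 2 - 1.43

# Hypothetical pavement: 40 mm (1.6 in) bituminous surfacing over 300 mm (11.8 in) granular base.
sn = structural_number([(0.30, 1.6), (0.12, 11.8)])
msn = modified_structural_number(sn, cbr_percent=5.0)
print(round(sn, 2), round(msn, 2))
```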
The progression of deflection, distress, unevenness, skid resistance and macro texture of the study roads was evaluated. Since pavement deterioration is a complex phenomenon influenced by all the above factors, pavement deterioration models were developed as non-linear regression models using SPSS with the periodic data collected for all the above road stretches. General models were developed for cracking progression, raveling progression, pothole progression and roughness progression using SPSS. A model for construction quality was also developed. Calibration of the HDM-4 pavement deterioration models for local conditions was done using the data for cracking, raveling, potholes and roughness. Validation was done using the data collected in 2013. The application of HDM-4 to compare different maintenance and rehabilitation options was studied, considering deterioration parameters such as cracking, potholes and raveling. The alternatives considered for analysis were a base alternative with crack sealing and patching, an overlay of 40 mm BC using ordinary bitumen, an overlay of 40 mm BC using Natural Rubber Modified Bitumen, and an overlay of Ultra Thin White Topping. Economic analysis of these options was done considering the Life Cycle Cost (LCC). The average speeds that can be obtained by applying these options were also compared. The results were in favour of Ultra Thin White Topping over flexible pavements. Hence, design charts were also plotted for the estimation of maximum wheel load stresses for different slab thicknesses under different soil conditions. The design charts show the maximum stress for a particular slab thickness under different soil conditions, incorporating different k values, and can be handy for a design engineer. Fuzzy rule-based models developed for site-specific conditions were compared with the regression models developed using SPSS. The Riding Comfort Index (RCI) was calculated and correlated with unevenness to develop a relationship. Relationships were also developed between Skid Number and the macro texture of the pavement. The effort made through this research work will be helpful to highway engineers in understanding the behaviour of flexible pavements under Kerala conditions and in arriving at suitable maintenance and rehabilitation strategies. Key Words: Flexible Pavements – Performance Evaluation – Urban Roads – NH – SH and other roads – Performance Models – Deflection – Riding Comfort Index – Skid Resistance – Texture Depth – Unevenness – Ultra Thin White Topping

Relevance:

30.00%

Publisher:

Abstract:

The study of variable stars is an important topic of modern astrophysics. With the advent of powerful telescopes and high-resolution CCDs, variable star data are accumulating on the order of petabytes. This huge amount of data calls for automated methods as well as human experts. This thesis is devoted to data analysis of variable star astronomical time series and hence belongs to the interdisciplinary field of Astrostatistics. For an observer on Earth, stars whose apparent brightness changes over time are called variable stars. The variation in brightness may be regular (periodic), quasi-periodic (semi-periodic) or irregular (aperiodic), and is caused by various mechanisms. In some cases the variation is due to internal thermo-nuclear processes; such stars are generally known as intrinsic variables. In other cases it is due to external processes, such as eclipses or rotation; these are known as extrinsic variables. Intrinsic variables can be further grouped into pulsating variables, eruptive variables and flare stars. Extrinsic variables are grouped into eclipsing binary stars and chromospheric stars. Pulsating variables can again be classified into Cepheid, RR Lyrae, RV Tauri, Delta Scuti, Mira, etc. The eruptive or cataclysmic variables are novae, supernovae, etc., which occur rarely and are not periodic phenomena. Most of the other variations are periodic in nature. Variable stars can be observed in many ways, such as photometry, spectrophotometry and spectroscopy. A sequence of photometric observations of a variable star produces time series data containing time, magnitude and error. The plot of a variable star's apparent magnitude against time is known as a light curve. If the time series data are folded on a period, the plot of apparent magnitude against phase is known as a phased light curve. The unique shape of the phased light curve is characteristic of each type of variable star. One way to identify the type of a variable star and to classify it is visual inspection of the phased light curve by an expert. For the last several years, automated algorithms have been used to classify groups of variable stars with the help of computers. Research on variable stars can be divided into different stages: observation, data reduction, data analysis, modeling and classification. Modeling of variable stars helps to determine their short-term and long-term behaviour, to construct theoretical models (e.g. the Wilson-Devinney model for eclipsing binaries) and to derive stellar properties like mass, radius, luminosity, temperature, internal and external structure, chemical composition and evolution. Classification requires the determination of basic parameters like period, amplitude and phase, as well as some other derived parameters. Of these, the period is the most important, since wrong periods can lead to sparse light curves and misleading information. Time series analysis applies mathematical and statistical tests to data in order to quantify the variation, understand the nature of time-varying phenomena, gain physical understanding of the system and predict its future behaviour. Astronomical time series usually suffer from unevenly spaced time instants, varying error conditions and the possibility of big gaps. This is due to the daily variation of daylight and to weather conditions for ground-based observations, while observations from space may suffer from the impact of cosmic ray particles.
Many large-scale astronomical surveys such as MACHO, OGLE, EROS, ROTSE, PLANET, Hipparcos, MISAO, NSVS, ASAS, Pan-STARRS, Kepler, ESA, Gaia, LSST and CRTS provide variable star time series data, even though their primary intention is not variable star observation. The Center for Astrostatistics at Pennsylvania State University was established to help the astronomical community with statistical tools for harvesting and analysing archival data. Most of these surveys release their data to the public for further analysis. Many period-search algorithms exist for astronomical time series analysis; they can be classified into parametric methods (which assume some underlying distribution for the data) and non-parametric methods (which do not assume any statistical model, such as a Gaussian). Many of the parametric methods are based on variations of the discrete Fourier transform, such as the Generalised Lomb-Scargle periodogram (GLSP) by Zechmeister (2009) and the Significant Spectrum (SigSpec) method by Reegen (2007). Non-parametric methods include Phase Dispersion Minimisation (PDM) by Stellingwerf (1978) and the cubic spline method by Akerlof (1994). Even though most of these methods can be automated, none of them fully recovers the true periods. Wrong period detection can have several causes, such as power leakage to other frequencies, which arises from the finite total interval, finite sampling interval and finite amount of data. Another problem is aliasing, which is due to the influence of regular sampling. Spurious periods also appear due to long gaps, and power flow to harmonic frequencies is an inherent problem of Fourier methods. Hence obtaining the exact period of a variable star from its time series data is still a difficult problem, especially when huge databases are subjected to automation. As Matthew Templeton, AAVSO, states, "Variable star data analysis is not always straightforward; large-scale, automated analysis design is non-trivial". Derekas et al. (2007) and Deb et al. (2010) state that "The processing of huge amount of data in these databases is quite challenging, even when looking at seemingly small issues such as period determination and classification". It would be beneficial for the variable star community if basic parameters such as period, amplitude and phase were obtained more accurately when huge time series databases are subjected to automation. In the present thesis work, the theories behind four popular period-search methods are studied, the strengths and weaknesses of these methods are evaluated by applying them to two survey databases, and finally a modified form of the cubic spline method is introduced to confirm the exact period of a variable star. For the classification of newly discovered variable stars and their entry into the "General Catalogue of Variable Stars" or other databases like the "Variable Star Index", the characteristics of the variability have to be quantified in terms of variable star parameters.
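To make the phase-folding and period-search ideas concrete, the short sketch below (illustrative Python using only NumPy, not code from the thesis) folds an unevenly sampled light curve on trial periods and scores each with a simple Stellingwerf-style phase dispersion statistic; the light curve and bin count are hypothetical:

```python
import numpy as np

def phase_fold(t, period):
    """Fold observation times t (days) on a trial period; return phases in [0, 1)."""
    return (t % period) / period

def pdm_statistic(t, mag, period, n_bins=10):
    """Stellingwerf-style theta: pooled within-bin variance divided by total variance.
    Values well below 1 indicate a coherent phased light curve (a good trial period)."""
    phase = phase_fold(t, period)
    bins = np.floor(phase * n_bins).astype(int)
    total_var = np.var(mag, ddof=1)
    pooled_num, pooled_den = 0.0, 0
    for b in range(n_bins):
        m = mag[bins == b]
        if m.size > 1:
            pooled_num += (m.size - 1) * np.var(m, ddof=1)
            pooled_den += m.size - 1
    return (pooled_num / pooled_den) / total_var

# Hypothetical unevenly sampled sinusoidal light curve with a true period of 2.5 d.
rng = np.random.default_rng(0)
t = np.sort(rng.uniform(0.0, 100.0, 300))
mag = 12.0 + 0.3 * np.sin(2 * np.pi * t / 2.5) + rng.normal(0.0, 0.02, t.size)

trial_periods = np.linspace(1.0, 5.0, 2000)
theta = np.array([pdm_statistic(t, mag, p) for p in trial_periods])
print("best trial period:", trial_periods[np.argmin(theta)])
```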

Relevance:

30.00%

Publisher:

Abstract:

Since 1999, with the adoption of an expansion policy in higher education by the Chinese government, enrollment and graduate numbers have been increasing at an unprecedented speed. Accustomed to a system in which university graduates were placed in jobs, many students are not trained in "selling themselves", which exacerbates the situation and has led to a skyrocketing unemployment rate among new graduates. The idea of emphasizing career services has come with the increasing employment pressure on university graduates in recent years. The 1998 "Higher Education Act" made it a legislative requirement. Thereafter, the Ministry of Education issued a series of documents in order to promote the development of career services. All higher education institutions are required to set up special career service centers and to maintain a ratio of 1:500 between career staff and the total number of students. Related career management courses, especially career planning classes, are required to be clearly included as specific modules in the teaching plan, with a requirement of no less than 38 sessions in one semester at all universities. Developing career services in higher education has thus become a hot issue. One of the more notable trends in higher education in recent years has been the transformation of university career service centers from mere coordinators of on-campus placement into full service centers for international career development. The traditional core of career services in higher education had been built around guidance, information and placements (Watts, 1997). This core is still in place, but the role of higher education career services has changed considerably in recent years and the nature of each part is being transformed (Watts, 1997). Most services are undertaking a range of additional activities, and career guidance is emphasized much more than before. Career management courses, especially career planning classes, are given special focus in developing career services in the Chinese case. This links career services clearly and directly with the course provision function. In China, most career service centers are in a period of transformation from a "management-oriented" organization to a "service-oriented" organization. Besides guidance services, information services and placement activities, there is a need to blend these together with the new additional teaching function, following the general trend as regulated by the government. The role of career services has been expanding, and this has brought more challenges to its development in Chinese higher education. Chinese universities are still in a period of exploration and establishment in developing their own career services. In the face of this new situation, it is very important and meaningful to explore and establish a comprehensive career services system to address student needs in the universities. A key part of developing this system is the introduction of career courses and the delivery of related career management skills to students. There is thus a need to restructure the career service sectors within Chinese universities in general. The career service centers would operate as the hub in a hub-and-spoke model, providing support and information to staff located in individual teaching departments who are responsible for the delivery of career education, information, advice and guidance. The career service centers would also provide training and career planning classes.
The purpose of establishing a comprehensive career services system is to provide a strong base for student career development. With the assistance of effective career services, students can prepare themselves well, in terms of psychology, ideology and ability, before employment. To conclude, according to the different characteristics and needs of students, appropriate services and guidance should be offered at different stages and in different ways. In other words, related career services and career guidance activities would start with newly enrolled freshmen and continue throughout their whole university career. For the operation of a comprehensive services system, there is a need not only for strong support by the government in the form of macro-control and policy guarantees, but also for close cooperation with the academic administration and faculties, which need to be actively involved in career planning and employment programs. As an integral function within the universities, career services must develop and maintain productive relationships with relevant campus offices and key stakeholders both within the universities and externally.

Relevance:

30.00%

Publisher:

Abstract:

Internationalization of higher education has become one of the most important policies for institutions of higher education worldwide. Though universities are international by nature, the need for intensified, high-quality activities of an international nature has put internationalization in the spotlight of researchers, administrators and policy makers and made it an area for research. Each institution follows its own way of governing its international affairs. Most universities, especially in the 'Developed World', have started to plan it strategically. This study explores the meanings and importance of internationalization, especially as it means different things to different people. It also studies the rationales behind internationalizing higher education, focusing on the four main prevailing rationales: political, cultural/social, economic/financial and academic, at both national and institutional levels. Given the increasing need for strategic planning, the study explores internationalization strategies in terms of how to develop them, their approaches and types, and their components and dimensions. Damascus University has witnessed an overwhelming growth of its international relations and activities. It has therefore started to face the problem of how to deal with this increasing load, especially since its International Office is the only unit that deals with international issues. In order to study the internationalization phenomenon at Damascus University, the 2WH approach, which asks the what, why and how questions, is used. In order to define the International Office's role in the internationalization process of the University, the study examines this office together with the international offices of Kassel University and Humboldt University in Germany, The University of Jordan, and Al Baath University in Syria, using the 'SOCIAL' approach, which studies and analyses the situation, organization, challenges, involvement, ambitions and limitations of these offices. The internationalization process at the above-mentioned universities is studied and compared in terms of its meaning, its rationales for both the institution and its academic staff, its challenges and its strategic planning. A comparison is then made among the international offices of the universities to identify their approaches and what led to success or failure in their practices. The aim is to provide Damascus University and its International Office with some good practices and, drawing on the experiences of the professionals at the case-study universities, to offer suggested guidance for the work of this Office and the University in general. The study uses interviews with officials and stakeholders of the case studies as the main method of collecting information, in addition to site visits and the study of official documents and websites. The study belongs to qualitative research with an action dimension, since the recommendations will be applied in the International Office. The study concludes with a few lessons learned for Damascus University and its International Office, based on a comparison made along a set of dimensions. Finally, a reflection on the relationship between internationalization of higher education and politics, the impact of politics on Middle Eastern universities, and institutional internationalization strategies is presented.

Relevance:

30.00%

Publisher:

Abstract:

Expansion of rubber tree plantations and agricultural mechanization have caused a decline in swamp buffalo numbers in the Naban River National Nature Reserve (NRNNR), Yunnan Province, China. We analysed the current use of buffaloes for field work and the recent development of the regional buffalo population, based on interviews with 184 farmers in 2007/2008 and discussions with 62 buffalo keepers in 2009. Three types of NRNNR farms were distinguished, differing mainly in altitude, area under rubber and involvement in livestock husbandry. While pig-based farms (PB; n=37) have abandoned buffalo keeping, 11% of the rubber-based farms (RB; n=71) and 100% of the livestock-corn based farms (LB; n=76) kept buffaloes in 2008. Herd size was 2.5 ± 1.80 buffaloes (n=84) in early 2008 and 2.2 ± 1.69 (n=62) in 2009. Field work on the farmers' own land was the main reason for keeping buffaloes (87.3%), but lending work buffaloes to neighbours (79.0%) was also important. Other purposes were transport of goods (16.1%), buffalo trade (11.3%) and meat consumption (6.4%). Buffalo care required 6.2 ± 3.00 working hours daily, while the annual working time of a buffalo was 294 ± 216.6 hours. The area ploughed with buffaloes has remained constant during the past 10 years despite an expansion of the land cropped per farm. Although further replacement of buffaloes by tractors is occurring rapidly, buffaloes still provide a cheap work force and buffer risks on poor NRNNR farms. Appropriate advice is needed on improved breeding management to increase the efficiency of buffalo husbandry and provide better opportunities for buffalo meat sales in the region.

Relevance:

30.00%

Publisher:

Abstract:

Tsunoda et al. (2001) recently studied the nature of object representation in monkey inferotemporal cortex using a combination of optical imaging and extracellular recordings. In particular, they examined IT neuron responses to complex natural objects and "simplified" versions thereof. In that study, in 42% of the cases, optical imaging revealed a decrease in the number of activation patches in IT as stimuli were "simplified". However, in 58% of the cases, "simplification" of the stimuli actually led to the appearance of additional activation patches in IT. Based on these results, the authors propose a scheme in which an object is represented by combinations of active and inactive columns coding for individual features. We examine the patterns of activation caused by the same stimuli as used by Tsunoda et al. in our model of object recognition in cortex (Riesenhuber 99). We find that object-tuned units can show a pattern of appearance and disappearance of features identical to the experiment. Thus, the data of Tsunoda et al. appear to be in quantitative agreement with a simple object-based representation in which an object's identity is coded by its similarities to reference objects. Moreover, the agreement of simulations and experiment suggests that the simplification procedure used by Tsunoda et al. (2001) is not necessarily an accurate method to determine neuronal tuning.

Relevance:

30.00%

Publisher:

Abstract:

Each player in the financial industry, each bank, stock exchange, government agency, or insurance company operates its own financial information system or systems. By its very nature, financial information, like the money that it represents, changes hands. Therefore the interoperation of financial information systems is the cornerstone of the financial services they support. E-services frameworks such as web services are an unprecedented opportunity for the flexible interoperation of financial systems. Naturally the critical economic role and the complexity of financial information led to the development of various standards. Yet standards alone are not the panacea: different groups of players use different standards or different interpretations of the same standard. We believe that the solution lies in the convergence of flexible E-services such as web-services and semantically rich meta-data as promised by the semantic Web; then a mediation architecture can be used for the documentation, identification, and resolution of semantic conflicts arising from the interoperation of heterogeneous financial services. In this paper we illustrate the nature of the problem in the Electronic Bill Presentment and Payment (EBPP) industry and the viability of the solution we propose. We describe and analyze the integration of services using four different formats: the IFX, OFX and SWIFT standards, and an example proprietary format. To accomplish this integration we use the COntext INterchange (COIN) framework. The COIN architecture leverages a model of sources and receivers’ contexts in reference to a rich domain model or ontology for the description and resolution of semantic heterogeneity.
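As a toy illustration of the kind of context mediation described here (a minimal sketch; the contexts, field conventions and conversion rules are hypothetical, not the actual COIN framework or the IFX/OFX/SWIFT schemas), a mediator can re-express a monetary amount and a date reported under a source context in the form a receiver context expects:

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass
class Context:
    scale_factor: int   # e.g. 100 means monetary amounts are reported in cents
    date_format: str    # e.g. "%Y%m%d" vs "%d/%m/%Y"

SOURCE = Context(scale_factor=100, date_format="%Y%m%d")     # hypothetical OFX-like source context
RECEIVER = Context(scale_factor=1, date_format="%d/%m/%Y")   # hypothetical bill-payment receiver context

def mediate_amount(value, src, dst):
    """Re-express a monetary amount reported in the source context in the receiver context."""
    return value * dst.scale_factor / src.scale_factor

def mediate_date(value, src, dst):
    """Re-express a date string from the source format in the receiver format."""
    return datetime.strptime(value, src.date_format).strftime(dst.date_format)

# A bill amount of 12345 cents becomes 123.45 currency units for the receiver,
# and the posting date is rewritten in the receiver's expected format.
print(mediate_amount(12345, SOURCE, RECEIVER))
print(mediate_date("20240131", SOURCE, RECEIVER))
```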

Relevance:

30.00%

Publisher:

Abstract:

Scitable is an open online teaching/learning portal combining high quality educational articles authored by editors at NPG with technology-based community features to fuel a global exchange of scientific insights, teaching practices, and study resources. Scitable currently contains articles in the field of genetics, and is intended for college undergraduate faculty and students. Future plans involve extension of Scitable to other fields within the life sciences, as well as to other audiences. Scitable brings together a library of scientific overviews with a worldwide community of scientists, researchers, teachers and students. Nature Education is a new division of Nature Publishing Group devoted to facilitating high quality, innovative, accessible science education in all countries of the world.

Relevance:

30.00%

Publisher:

Abstract:

The theory of reciprocity is predicated on the assumption that people are willing to reward nice or kind acts and to punish unkind ones. This assumption raises the question as to how to define kindness. In this paper we offer a new definition of kindness that we call “blame-freeness.” Put most simply, blame-freeness states that in judging whether player i has been kind or unkind to player j in a social situation, player j would have to put himself in the strategic position of player i, while retaining his preferences, and ask if he would have acted in a manner that was worse than i did under identical circumstances. If j would have acted in a more unkind manner than i acted, then we say that j does not blame i for his behavior. If, however, j would have been nicer than i was, then we say that “j blames i” for his actions (i’s actions were blameworthy). We consider this notion a natural, intuitive and empirically relevant way to explain the motives of people engaged in reciprocal behavior. After developing the conceptual framework, we then test this concept in a laboratory experiment involving tournaments and find significant support for the theory.
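A compact way to state this definition (an illustrative formalization with assumed notation, not taken verbatim from the paper): let $a_i$ be the action player $i$ actually chose in situation $s_i$, let $u_j$ denote $j$'s own preferences, and let $a_j^{*}(s_i)$ be what $j$ would have chosen in $i$'s strategic position while retaining $u_j$. Then

```latex
% Illustrative formalization of blame-freeness (notation assumed, not from the paper).
\[
\text{$j$ does not blame $i$} \iff u_j\bigl(a_j^{*}(s_i)\bigr) \le u_j(a_i),
\qquad
\text{$j$ blames $i$} \iff u_j\bigl(a_j^{*}(s_i)\bigr) > u_j(a_i).
\]
```

That is, i's action is blameworthy only when j, placed in i's position, would have treated j more kindly than i actually did.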

Relevance:

30.00%

Publisher:

Abstract:

The electron hole transfer (HT) properties of DNA are substantially affected by thermal fluctuations of the π-stack structure. Depending on the mutual position of neighboring nucleobases, the electronic coupling V may change by several orders of magnitude. In the present paper, we report the results of systematic QM/molecular dynamics (MD) calculations of the electronic couplings and on-site energies for hole transfer. Based on 15 ns MD trajectories for several DNA oligomers, we calculate the average coupling squares 〈V²〉 and the energies of base-pair triplets X G+ Y and X A+ Y, where X, Y = G, A, T, and C. For each of the 32 systems, 15 000 conformations separated by 1 ps are considered. The three-state generalized Mulliken-Hush method is used to derive electronic couplings for HT between neighboring base pairs. The adiabatic energies and dipole moment matrix elements are computed within the INDO/S method. We compare the rms values of V with the couplings estimated for the idealized B-DNA structure and show that in several important cases the couplings calculated for the idealized B-DNA structure are considerably underestimated. The rms values for the intrastrand couplings G-G, A-A, G-A, and A-G are found to be similar, ∼0.07 eV, while the interstrand couplings are quite different. The energies of the hole states G+ and A+ in the stack depend on the nature of the neighboring pairs. The X G+ Y triplets are more stable than X A+ Y by 0.5 eV. The thermal fluctuations of the DNA structure facilitate the HT process from guanine to adenine. The tabulated couplings and on-site energies can be used as reference parameters in theoretical and computational studies of HT processes in DNA.
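As a small illustration of the averaging step described above (illustrative Python with made-up snapshot couplings, not the paper's data), the thermally averaged quantity 〈V²〉 and the corresponding rms coupling follow directly from the per-snapshot couplings:

```python
import numpy as np

# Hypothetical electronic couplings V (in eV) for one base-pair step, sampled
# from an MD trajectory at 1 ps intervals; the study itself uses 15 000 snapshots.
rng = np.random.default_rng(1)
V = rng.normal(loc=0.03, scale=0.06, size=15_000)

mean_V2 = np.mean(V ** 2)    # thermally averaged coupling square <V^2>
V_rms = np.sqrt(mean_V2)     # rms coupling, the quantity compared against idealized B-DNA values

print(f"<V^2> = {mean_V2:.5f} eV^2, V_rms = {V_rms:.3f} eV")
```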

Relevance:

30.00%

Publisher:

Abstract:

This thesis forms part of a project aimed at predicting the academic performance of doctoral students, carried out by INSOC (International Network on Social Capital and Performance). The INSOC research group is formed by the universities of Girona (Spain), Ljubljana (Slovenia), Giessen (Germany) and Ghent (Belgium). The first objective of this thesis is to develop comparative quantitative analyses of the academic performance of doctoral students in Spain, Slovenia and Germany, based on the individual academic performance results obtained from each of the universities. The international nature of the research group entails comparative research. We used personal, attitudinal and network variables to predict performance. The second objective of this thesis is to understand, in a qualitative way, why the network variables do not help quantitatively to predict performance at the University of Girona (Spain). In Chapter 1 we define concepts related to performance and list each of the independent variables (network, personal and attitudinal variables), summarising the literature. Finally, we explain how doctoral studies are organised in each of the different countries. Starting from these theoretical definitions, in the following chapters we first present the questionnaires used in Spain, Slovenia and Germany to measure these different types of variables. We then compare the variables that are relevant for predicting the performance of doctoral students in each country. After that, we fit different regression models to predict performance across countries. In all these models the network variables fail to predict performance at the University of Girona. Finally, we use qualitative studies to understand these unexpected results. In Chapter 2 we explain how we designed and administered the questionnaires in the different countries with the aim of explaining the performance of doctoral students in Spain, Slovenia and Germany. In Chapter 3 we create comparable indicators, although comparability problems appear for particular questions in Spain, Slovenia and Germany. In this chapter we explain how we use the variables from the three countries to create comparable indicators. This step is very important because the main objective of the INSOC research group is to compare the performance of doctoral students across the different countries. In Chapter 4 we compare regression models obtained when predicting the performance of doctoral students at the University of Girona (Spain) and in Slovenia. The variables are characteristics of the doctoral students' research groups understood as an egocentric social network, personal and attitudinal characteristics of the doctoral students, and some characteristics of their supervisors. We found that the egocentric network variables did not predict performance at the University of Girona. In Chapter 5 we compare Slovenian, Spanish and German data, following the methodology of Chapter 4. We conclude that the German case is very different. The predictive power of the network variables does not improve. In Chapter 6 the doctoral student's research group is understood as a duocentred network (Coromina et al., 2008), with the aim of obtaining information on the mutual relationship between the students and their supervisors and on the contacts of both with the other members of the network.
The inclusion of the duocentred network does not improve the predictive power of the regression model using the egocentric network variables. Chapter 7 seeks to understand why the network variables do not predict performance at the University of Girona. We use a mixed-methods approach, expecting the qualitative study to uncover the reasons why the quality of the network fails to explain the quality of the students' work. To collect data for the qualitative study we use in-depth interviews.
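For concreteness, the kind of regression comparison carried out in Chapters 4 and 5 can be sketched as follows (illustrative Python; the predictor names and data are hypothetical stand-ins, not the INSOC questionnaire items):

```python
# Illustrative sketch of fitting an OLS model of doctoral performance on
# personal, attitudinal and egocentric-network predictors. Variable names
# and data are hypothetical, not the actual INSOC questionnaire items.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(42)
n = 120
X = np.column_stack([
    rng.normal(size=n),   # e.g. hours of supervision per month (personal)
    rng.normal(size=n),   # e.g. intrinsic motivation score (attitudinal)
    rng.normal(size=n),   # e.g. density of the ego network (network)
])
performance = 0.5 * X[:, 0] + 0.8 * X[:, 1] + rng.normal(scale=1.0, size=n)

model = sm.OLS(performance, sm.add_constant(X)).fit()
print(model.summary())  # compare coefficients and R^2 across the country samples
```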

Relevance:

30.00%

Publisher:

Abstract:

The nature and magnitude of climatic variability during the period of middle Pliocene warmth (ca. 3.29–2.97 Ma) is poorly understood. We present a suite of palaeoclimate modelling experiments incorporating an advanced atmospheric general circulation model (GCM), coupled to a Q-flux ocean model, for 3.29, 3.12 and 2.97 Ma BP. Astronomical solutions for the periods in question were derived from the Berger and Loutre BL2 astronomical solution. Boundary conditions, excluding sea surface temperatures (SSTs), which were predicted by the slab-ocean model, were provided from the USGS PRISM2 2°×2° digital data set. The model results indicate that little annual variation (0.5°C) in SSTs, relative to a 'control' experiment, occurred during the middle Pliocene in response to the altered orbital configurations. Annual surface air temperatures also displayed little variation. Seasonally, surface air temperatures were cooler during December, January and February, and warmer during June, July and August. This pattern is consistent with the altered seasonality resulting from the prescribed orbital configurations. Precipitation changes follow the seasonal trend observed for surface air temperature. Compared to the present day, surface wind strength and wind stress over the North Atlantic, North Pacific and Southern Ocean remained greater in each of the Pliocene experiments. This suggests that wind-driven gyral circulation may have been consistently greater during the middle Pliocene. The trend of climatic variability predicted by the GCM for the middle Pliocene accords with geological data. However, it is unclear whether the model correctly simulates the magnitude of the variation. This uncertainty derives from (a) the relative insensitivity of the GCM to perturbation of the imposed boundary conditions, (b) a lack of detailed time series data concerning changes to terrestrial ice cover and greenhouse gas concentrations for the middle Pliocene, and (c) difficulties in representing the effects of 'climatic history' in snap-shot GCM experiments.
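As an illustration of how such an SST comparison against the control run can be diagnosed (a minimal sketch with hypothetical gridded output, not the actual model data or analysis code):

```python
import numpy as np

# Hypothetical monthly SST fields (degC) on a 36 x 72 lat-lon grid for a
# Pliocene time-slice experiment and the control run; stand-ins for model output.
nmonth, nlat, nlon = 12, 36, 72
lat = np.linspace(-87.5, 87.5, nlat)
rng = np.random.default_rng(0)

sst_control = 15.0 + 12.0 * np.cos(np.deg2rad(lat))[None, :, None] \
              + rng.normal(0.0, 0.2, (nmonth, nlat, nlon))
sst_pliocene = sst_control + 0.5   # pretend a uniform 0.5 degC offset

# Annual-mean anomaly, then an area-weighted (cos-latitude) global mean.
annual_anom = (sst_pliocene - sst_control).mean(axis=0)           # shape (nlat, nlon)
weights = np.cos(np.deg2rad(lat))[:, None] * np.ones((nlat, nlon))
global_mean_anom = np.average(annual_anom, weights=weights)
print(f"area-weighted annual-mean SST anomaly: {global_mean_anom:.2f} degC")
```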