964 results for Data exchange formats
Abstract:
International research shows that low-volatility stocks have beaten high-volatility stocks in terms of returns for decades across multiple markets. This deviation from the traditional risk-return framework is known as the low-volatility anomaly. This study focuses on explaining the anomaly and determining how strongly it appears on the NASDAQ OMX Helsinki stock exchange. The data consist of all listed companies from 2001 until close to 2015. The methodology closely follows Baker and Haugen (2012): companies are sorted into deciles according to their 3-month volatility, and monthly returns are then calculated for each volatility group. The annualized return for the lowest-volatility decile is 8.85%, while the highest-volatility decile destroys wealth at a rate of -19.96% per annum. Results are similar for quintiles, which contain a larger number of companies and therefore dilute the effect of outliers. The observation period captures the 2007-2008 financial crisis and the European debt crisis, which is reflected in a low annual return of 1% for the main index, but at the same time demonstrates the success of the low-volatility strategy. The low-volatility anomaly is driven by several factors, such as leverage-constrained trading and managerial incentives, both of which encourage investment in risky assets, while behavioral factors also carry major weight in sustaining the anomaly.
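A minimal sketch of the decile-sorting step, assuming month-end prices in a pandas DataFrame; the use of monthly returns over a 3-month window, equal weighting within deciles, and all names are illustrative assumptions rather than the thesis's exact specification:

```python
import pandas as pd

def volatility_decile_returns(prices: pd.DataFrame) -> pd.DataFrame:
    """Sort stocks into deciles by trailing 3-month volatility and compute the
    equal-weighted return of each decile in the following month."""
    monthly_returns = prices.pct_change()
    # Trailing volatility of the last three monthly returns, known at month-end t
    trailing_vol = monthly_returns.rolling(window=3).std()

    decile_returns = {}
    for i, date in enumerate(trailing_vol.index[:-1]):
        vol = trailing_vol.loc[date].dropna()
        if len(vol) < 10:
            continue  # not enough stocks yet to form deciles
        deciles = pd.qcut(vol, 10, labels=False, duplicates="drop")  # 0 = lowest volatility
        next_date = trailing_vol.index[i + 1]
        realized = monthly_returns.loc[next_date, vol.index]
        decile_returns[next_date] = realized.groupby(deciles).mean()

    return pd.DataFrame(decile_returns).T  # rows: months, columns: decile labels

# Geometric annualization of each decile's monthly returns, e.g.:
# decile_rets = volatility_decile_returns(prices)
# annualized = (1 + decile_rets).prod() ** (12 / len(decile_rets)) - 1
```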
Abstract:
Background. The value of respiratory variables as weaning predictors in the intensive care unit (ICU) is controversial. We evaluated the ability of tidal volume (Vtexp), respiratory rate (f), minute volume (MVexp), rapid shallow breathing index (f/Vt), inspired–expired oxygen concentration difference [(I–E)O2], and end-tidal carbon dioxide concentration (PE′CO2) at the end of a weaning trial to predict early weaning outcomes. Methods. Seventy-three patients who required >24 h of mechanical ventilation were studied. A controlled pressure support weaning trial was undertaken until 5 cm H2O continuous positive airway pressure or predefined criteria were reached. We evaluated whether data from the last 5 min of the trial could predict a predefined endpoint indicating discontinuation of ventilator support within the next 24 h. Results. Pre-test probability for achieving the outcome was 44% in the cohort (n=32). Non-achievers were older, had higher APACHE II and organ failure scores before the trial, and higher baseline arterial H+ concentrations. Vt, MV, f, and f/Vt had no predictive power using a range of cut-off values or receiver operating characteristic (ROC) analysis. The (I–E)O2 and PE′CO2 had weak discriminatory power [area under the ROC curve: (I–E)O2 0.64 (P=0.03); PE′CO2 0.63 (P=0.05)]. Using best cut-off values of 5.6% for (I–E)O2 and 5.1 kPa for PE′CO2, positive and negative likelihood ratios were 2 and 0.5, respectively, which only changed the pre- to post-test probability by about 20%. Conclusions. In unselected ICU patients, respiratory variables predict early weaning from mechanical ventilation poorly.
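As a check on the reported magnitude, the standard relation between likelihood ratio and pre-/post-test probability can be worked through for the positive likelihood ratio LR+ = 2 and pre-test probability 0.44 (a generic Bayesian calculation, not taken from the paper itself):

```latex
\[
\text{pre-test odds} = \frac{0.44}{1 - 0.44} \approx 0.79, \qquad
\text{post-test odds} = \mathrm{LR}^{+} \times \text{pre-test odds} = 2 \times 0.79 \approx 1.57,
\]
\[
\text{post-test probability} = \frac{1.57}{1 + 1.57} \approx 0.61 .
\]
```

A positive test therefore moves the probability from 44% to roughly 61% (and a negative test, with LR− = 0.5, down to about 28%), a shift of under 20 percentage points either way, consistent with the conclusion that these variables are weak predictors.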
Abstract:
Many exchange rate papers articulate the view that instabilities constitute a major impediment to exchange rate predictability. In this thesis we implement Bayesian and other techniques to account for such instabilities, and we examine some of the main obstacles to exchange rate models' predictive ability. In Chapter 2 we first consider a time-varying parameter model in which fluctuations in exchange rates are related to short-term nominal interest rates ensuing from monetary policy rules, such as Taylor rules. Unlike existing exchange rate studies, the parameters of our Taylor rules are allowed to change over time, in light of the widespread evidence of shifts in fundamentals - for example in the aftermath of the Global Financial Crisis. Focusing on quarterly-frequency data from the crisis period, we detect forecast improvements upon a random walk (RW) benchmark for at least half, and for as many as seven out of 10, of the currencies considered. Results are stronger when we allow the time-varying parameters of the Taylor rules to differ between countries. In Chapter 3 we look closely at the role of time variation in parameters, and of other sources of uncertainty, in hindering exchange rate models' predictive power. We apply a Bayesian setup that incorporates the notion that the relevant set of exchange rate determinants, and their corresponding coefficients, change over time. Using statistical and economic measures of performance, we first find that predictive models which allow for sudden, rather than smooth, changes in the coefficients yield significant forecast improvements and economic gains at horizons beyond one month. At shorter horizons, however, our methods fail to forecast better than the RW. We identify uncertainty in coefficient estimation, and uncertainty about the precise degree of coefficient variability to incorporate in the models, as the main factors obstructing predictive ability. Chapter 4 focuses on the problem of the time-varying predictive ability of economic fundamentals for exchange rates. It uses bootstrap-based methods to uncover the time-specific conditioning information for predicting fluctuations in exchange rates. Employing several metrics for statistical and economic evaluation of forecasting performance, we find that our approach, based on pre-selecting and validating fundamentals across bootstrap replications, generates more accurate forecasts than the RW. The approach, known as bumping, robustly reveals parsimonious models with out-of-sample predictive power at the 1-month horizon, and outperforms alternative methods, including Bayesian, bagging, and standard forecast combinations. Chapter 5 exploits the predictive content of daily commodity prices for monthly commodity-currency exchange rates. It builds on the idea that the effect of daily commodity price fluctuations on commodity currencies is short-lived, and therefore harder to pin down at low frequencies. Using MIxed DAta Sampling (MIDAS) models, and Bayesian estimation methods to account for time variation in predictive ability, the chapter demonstrates the usefulness of suitably exploiting such short-lived effects to improve exchange rate forecasts. It further shows that the usual low-frequency predictors, such as money supply and interest rate differentials, typically receive little support from the data at the monthly frequency, whereas MIDAS models featuring daily commodity prices are strongly favoured by the data.
The chapter also introduces the random walk Metropolis-Hastings technique as a new tool to estimate MIDAS regressions.
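The random walk Metropolis-Hastings step is a standard sampler; a minimal generic sketch is given below, assuming the user supplies the log posterior of the MIDAS regression parameters. The proposal scale, parameter ordering, and function names are placeholders, not the thesis's implementation:

```python
import numpy as np

def random_walk_metropolis(log_post, theta0, n_draws=10_000, step=0.1, seed=0):
    """Generic random-walk Metropolis-Hastings sampler.

    `log_post` is the log posterior of the model parameters (for a MIDAS
    regression it would combine the low-frequency regression likelihood with
    priors on the polynomial-weighting parameters). Gaussian proposals with
    scale `step` are an illustrative choice.
    """
    rng = np.random.default_rng(seed)
    theta = np.asarray(theta0, dtype=float)
    current_lp = log_post(theta)
    draws = np.empty((n_draws, theta.size))
    accepted = 0
    for i in range(n_draws):
        proposal = theta + step * rng.standard_normal(theta.size)
        proposal_lp = log_post(proposal)
        # Accept with probability min(1, posterior ratio); the symmetric proposal cancels
        if np.log(rng.uniform()) < proposal_lp - current_lp:
            theta, current_lp = proposal, proposal_lp
            accepted += 1
        draws[i] = theta
    return draws, accepted / n_draws
```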
Abstract:
As academic student mobility increases, improving the functionality of international operations is recognised as a competitive advantage for tertiary education institutions. Although many scholars have researched the experiences of exchange students, the role of student tutors and their contribution to exchange students' experiences remains an unknown factor. This research examines international tutoring at the University of Turku; it aims to understand better how tutoring contributes to exchange experiences, to explore the functionality of the tutor system, and to discover areas for improvement. To achieve these goals, the research seeks to answer the fundamental research question: What is the role of tutors in mediating exchange experiences? The theoretical framework combines literature on mediating exchange experiences, the phenomenon of studying abroad, the process of adaptation, the importance of cross-cultural communication, and the role of student tutors as mediators. Based on the literature review, a theoretical model for studying the mediation of exchange experiences is introduced. The model's applicability and validity are examined through a case study. Three methods were used in the empirical research: surveys, participant observation, and interviews. These methods provided extensive data from the three major parties of the tutor system: tutors, exchange students, and the international office. The findings reveal that tutoring – instrumental leading and social and cultural mediating – generates both negative and positive experiences depending on the individuals' expectations, motivations, relationships, and the nature of the tutoring. Although the tutor system is functional, it has a few weaknesses. Tutors tend to act as effective instrumental leaders, but often fail to create a friendship and to contribute to exchange students' experiences through social and cultural mediation, which is significantly more important to the exchange students' overall experience in terms of building networks, adapting, gaining emotional experiences, and achieving personal development and mental change. Based on these weaknesses, three improvements are suggested: (1) increasing comprehensive sharing of information, effective communication, and collective cooperation; (2) emphasising the importance of social and cultural mediation and increasing the frequency of interaction between tutors and exchange students; and (3) improving recruitment and training, revising the process of reporting and rewarding, and enhancing services and coordination.
Abstract:
During the late Miocene, exchange between the Mediterranean Sea and Atlantic Ocean changed dramatically, culminating in the Messinian Salinity Crisis (MSC). Understanding Mediterranean-Atlantic exchange at that time could answer the enigmatic question of how so much salt built up within the Mediterranean, while furthering the development of a framework for future studies attempting to understand how these changes may have impacted global thermohaline circulation. Because of their association with specific water masses at different scales, radiogenic Sr, Pb, and Nd isotope records were generated from various archives contained within marine deposits in order to better understand late Miocene Mediterranean-Atlantic exchange. The archives used include foraminiferal calcite (Sr), fish teeth and bone (Nd), dispersed authigenic ferromanganese oxyhydroxides (Nd, Pb), and a ferromanganese crust (Pb). The primary focus is on sediments preserved at one end of the Betic corridor, a gateway that once connected the Mediterranean to the Atlantic through southern Spain, although other locations are also investigated. The Betic gateway terminated within several marginal sub-basins before entering the Western Mediterranean; one of these is the Sorbas Basin, a well-studied location whose sediments have been astronomically tuned at high temporal resolution, providing the necessary age control for sub-precessional resolution records. Since the climatic history of the Mediterranean is strongly controlled by precessional changes in regional climate, the aim was to produce records at high (sub-precessional) temporal resolution, in order to observe clearly any precessional cyclicity driven by regional climate that could be superimposed on longer trends. This goal was achieved for all records except the ferromanganese crust record. The 87Sr/86Sr isotope record (Ch. 3) shows precessional-frequency excursions away from the global seawater curve. As precessional-frequency oscillations are unexpected for this setting, a numerical box model was used to determine the mechanisms causing the excursions. To enable parameterisation of the model variables, regional Sr characteristics, data from the general circulation model HadCM3L, and new benthic foraminiferal assemblage data are employed. The model results imply that the Sorbas Basin likely had a positive hydrologic budget in the late Miocene, very different from that of today. Moreover, the model indicates that the mechanism controlling the Sr isotope ratio of Sorbas Basin seawater was not restriction, but a lack of density-driven exchange with the Mediterranean. Beyond improving our understanding of how marginal Mediterranean sub-basins may evolve different isotope signatures, these results have implications for astronomical tuning and stratigraphy in the region - findings which are crucial given that the geological and climatic history of the late Miocene Mediterranean is based entirely on marginal deposits. An improved estimate for the Nd isotope signature of late Miocene Mediterranean Outflow (MO) was determined by comparing Nd isotope signatures preserved in the deeper Alborán Sea at ODP Site 978 with literature data as well as with the signature preserved in the Sorbas Basin (Ch. 4; -9.34 to -9.92 ± 0.37 εNd(t)).
It was also inferred that Nd isotopes are unlikely to be usable as a reliable tracer of changes in circulation within the shallow settings characteristic of the Mediterranean-Atlantic connections; this is significant in light of a recent publication documenting corridor closure using Nd isotopes. Both conclusions will prove useful for future studies attempting to understand changes in Mediterranean-Atlantic exchange. Excursions to high values, with precessional frequency, are also observed in the radiogenic Pb isotope record for the Sorbas Basin (Ch. 5). Widening the scope to include locations further away from the gateways, records were also produced for late Miocene sections in Sicily and Northern Italy, and similar precessional-frequency cyclicity was observed in the Pb isotope records for these sites as well. Comparing these records to proxies for Saharan dust and to the available whole-rock data indicates that, while further analysis is necessary to draw strong conclusions, enhanced dust production during insolation minima may be driving the observed signal. These records also have implications for astronomical tuning: peaks in Pb isotope records driven by Saharan dust may be easier to connect directly to the insolation cycle, providing improved astronomical tuning points. Finally, a Pb isotope record derived using in-situ laser ablation performed on ferromanganese crust 3514-6 from the Lion Seamount, located west of Gibraltar within the MO plume, provides evidence that the plume depth shifted during the Pliocene. The record also suggests that Pb isotopes may not be a suitable proxy for changes in late Miocene Mediterranean-Atlantic exchange, since the Pb isotope signatures of the regional water masses are too similar. To develop this record, the first published instance of laser-ablation-derived 230Th-excess measurements is combined with 10Be dating.
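The box model itself is not specified in the abstract; as a rough illustration of the general approach, a one-box isotope mass balance of the kind commonly used for such problems might look as follows. The one-box structure, parameter names, and units are assumptions, not the model of Ch. 3:

```python
import numpy as np

def sr_isotope_box_model(r0, n_sr, inputs, t_end_yr=100_000, dt_yr=100.0):
    """One-box 87Sr/86Sr mass balance for a marginal basin.

    r0     : initial 87Sr/86Sr of basin seawater
    n_sr   : Sr inventory of the basin (mol)
    inputs : list of (Sr flux in mol/yr, 87Sr/86Sr of that flux) pairs, e.g.
             exchange with Mediterranean seawater plus river input
    """
    steps = int(t_end_yr / dt_yr)
    ratio = np.empty(steps)
    r = r0
    for i in range(steps):
        # Each input nudges the basin ratio toward its own ratio,
        # weighted by its Sr flux relative to the basin inventory.
        r += dt_yr * sum(f * (r_in - r) for f, r_in in inputs) / n_sr
        ratio[i] = r
    return ratio
```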
Abstract:
Variable Data Printing (VDP) has brought new flexibility and dynamism to the printed page. Each printed instance of a specific class of document can now have different degrees of customized content within the document template. This flexibility comes at a cost: if every printed page is potentially different from all others, it must be rasterized separately, which is a time-consuming process. Technologies such as PPML (Personalized Print Markup Language) attempt to address this problem by dividing the bitmapped page into components that can be cached at the raster level, thereby speeding up the generation of page instances. A large number of documents are stored in Page Description Languages at a higher level of abstraction than the bitmapped page. Much of this content could be reused within a VDP environment, provided that separable document components can be identified and extracted. These components then need to be individually rasterisable so that each high-level component can be related to its low-level (bitmap) equivalent. Unfortunately, the unstructured nature of most Page Description Languages makes it difficult to extract content easily. This paper outlines the problems encountered in extracting component-based content from existing page description formats, such as PostScript, PDF and SVG, and how the differences between the formats affect the ease with which content can be extracted. The techniques are illustrated with reference to a tool called COG Extractor, which extracts content from PDF and SVG and prepares it for reuse.
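To illustrate why format structure matters for component extraction, here is a toy sketch for the SVG case, where the XML tree makes candidate components directly addressable (unlike flattened PostScript). It is not the COG Extractor itself, and treating top-level <g> groups as component boundaries is an assumption:

```python
import xml.etree.ElementTree as ET

SVG_NS = "http://www.w3.org/2000/svg"

def extract_svg_components(path):
    """Pull out top-level <g> groups from an SVG page as candidate reusable
    components, each wrapped in a minimal standalone SVG so it can be
    rasterized on its own."""
    tree = ET.parse(path)
    root = tree.getroot()
    components = {}
    for i, group in enumerate(root.findall(f"{{{SVG_NS}}}g")):
        name = group.get("id", f"component_{i}")
        wrapper = ET.Element(f"{{{SVG_NS}}}svg", root.attrib)  # keep page-level attributes
        wrapper.append(group)
        components[name] = ET.tostring(wrapper, encoding="unicode")
    return components
```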
Abstract:
Every Argo data file submitted by a DAC for distribution on the GDAC has its format and data consistency checked by the Argo FileChecker. Two types of checks are applied: (1) format checks, which ensure that the file formats match the Argo standards precisely, and (2) data consistency checks, which are performed on a file after it passes the format checks. The consistency checks do not duplicate any of the quality control checks performed elsewhere; they can be thought of as “sanity checks” that ensure the data are consistent with each other. They enforce data standards and ensure that certain data values are reasonable and/or consistent with other information in the files. Examples of the “data standard” checks are the “mandatory parameters” defined for meta-data files and the technical parameter names in technical data files. Files with format or consistency errors are rejected by the GDAC and are not distributed. Less serious problems generate warnings, and the file is still distributed on the GDAC. Reference Tables and Data Standards: Many of the consistency checks involve comparing the data to the published reference tables and data standards. These tables are documented in the User’s Manual. (The FileChecker implements “text versions” of these tables.)
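A highly simplified sketch of the two-stage error/warning logic described above is shown below; the parameter names are only an illustrative subset of the Argo reference tables, and the real FileChecker operates on NetCDF files rather than a plain dictionary:

```python
MANDATORY_META_PARAMETERS = {"PLATFORM_NUMBER", "PROJECT_NAME", "DATA_CENTRE"}  # illustrative subset

def check_file(parsed):
    """Two-stage check in the spirit of the Argo FileChecker: format problems
    reject the file outright, consistency problems are errors or warnings.

    `parsed` is assumed to be a dict of parameter name -> value produced by
    some NetCDF reader; the thresholds below are illustrative sanity checks.
    """
    errors, warnings = [], []

    # 1. Format checks: structure must match the standard exactly.
    if not isinstance(parsed, dict):
        errors.append("file could not be parsed into parameters")
        return errors, warnings

    # 2. Data consistency checks, run only once the format is acceptable.
    for name in MANDATORY_META_PARAMETERS:
        if name not in parsed or parsed[name] in ("", None):
            errors.append(f"mandatory parameter {name} missing or empty")
    if "JULD" in parsed and not (0 <= parsed["JULD"] < 40000):
        warnings.append("JULD outside the expected range")  # warning only, file still distributed

    return errors, warnings
```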
Abstract:
We evaluate the effectiveness of the Colombian Central Bank's interventions in the foreign exchange market during the period 2000 to 2014. We examine the stochastic process that describes the exchange rate, with a focus on the detection of structural breaks or unit roots in the data, in order to determine whether the Central Bank's interventions were effective. We find that the exchange rate can be described either by a random walk or by a trend-stationary model with multiple breaks. In neither case do we find any evidence that the exchange rate was affected by the Central Bank's interventions.
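A minimal sketch of the kind of unit-root diagnostic referred to above, using the augmented Dickey-Fuller test from statsmodels with and without a deterministic trend; the function name and settings are illustrative, and structural-break detection would require additional tools (e.g. Zivot-Andrews or Bai-Perron type tests):

```python
import pandas as pd
from statsmodels.tsa.stattools import adfuller

def unit_root_diagnostics(exchange_rate: pd.Series):
    """Run ADF tests on the (log) exchange rate as a first pass at
    distinguishing a random walk from a trend-stationary process."""
    results = {}
    for regression in ("c", "ct"):  # constant only, constant plus trend
        stat, pvalue, *_ = adfuller(exchange_rate.dropna(),
                                    regression=regression, autolag="AIC")
        results[regression] = {"adf_stat": stat, "p_value": pvalue}
    return results
```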
Abstract:
We provide a comprehensive study of out-of-sample forecasts for the EUR/USD exchange rate based on multivariate macroeconomic models and forecast combinations. We use profit-maximization measures based on directional accuracy and trading strategies, in addition to standard loss-minimization measures. When comparing predictive accuracy and profit measures, tests that are free of data-snooping bias are used. The results indicate that forecast combinations, in particular those based on principal components of forecasts, help to improve over benchmark trading strategies, although the excess return per unit of deviation is limited.
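A textbook-style sketch of a principal-components forecast combination is given below, assuming a panel of individual model forecasts and the realized series; it illustrates the general idea rather than the paper's exact estimator:

```python
import numpy as np

def pc_forecast_combination(forecasts: np.ndarray, realized: np.ndarray, k: int = 1):
    """Combine a panel of model forecasts by projecting them onto their first k
    principal components and regressing realized values on those components.

    `forecasts` has shape (T, M): T periods, M individual model forecasts;
    `realized` has shape (T,).
    """
    f_centered = forecasts - forecasts.mean(axis=0)
    # Principal components of the forecast panel via SVD
    _, _, vt = np.linalg.svd(f_centered, full_matrices=False)
    components = f_centered @ vt[:k].T                     # (T, k) factor estimates
    x = np.column_stack([np.ones(len(realized)), components])
    beta, *_ = np.linalg.lstsq(x, realized, rcond=None)    # combination weights
    combined_forecast = x @ beta
    return combined_forecast, beta
```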
Abstract:
The International Space Station (ISS) requires a substantial amount of potable water for use by the crew. The economic and logistic limitations of transporting the vast amount of water required onboard the ISS necessitate onboard recovery and reuse of the aqueous waste streams. Various treatment technologies are employed within the ISS water processor to render the waste water potable, including filtration, ion exchange, adsorption, and catalytic wet oxidation. The ion exchange resins and adsorption media are combined in multifiltration beds for removal of ionic and organic compounds. A mathematical model (MFBMODEL™) designed to predict the performance of a multifiltration (MF) bed was developed. MFBMODEL consists of ion exchange models for describing the behavior of the different resin types in a MF bed (e.g., mixed bed, strong acid cation, strong base anion, and weak base anion exchange resins) and an adsorption model capable of predicting the performance of the adsorbents in a MF bed. Multicomponent ion exchange equilibrium models that incorporate the water formation reaction, electroneutrality condition, and degree of ionization of weak acids and bases for mixed bed, strong acid cation, strong base anion, and weak base anion exchange resins were developed and verified. The equilibrium models use a tanks-in-series approach that allows for consideration of variable influent concentrations. The adsorption modeling approach was developed in related studies, and its application within the MFBMODEL framework is demonstrated in the Appendix to this study. MFBMODEL consists of a graphical user interface programmed in Visual Basic and Fortran computational routines. This dissertation shows MF bed modeling results in which the model is verified for a surrogate of the ISS waste shower and handwash stream. In addition, a multicomponent ion exchange model that incorporates mass transfer effects, but not reaction effects, was developed; it is capable of describing the performance of strong acid cation (SAC) and strong base anion (SBA) exchange resins. This dissertation presents results showing the mass transfer model's capability to predict the performance of binary and multicomponent column data for SAC and SBA exchange resins. The ion exchange equilibrium and mass transfer models developed in this study are also applicable to terrestrial water treatment systems. They could be applied to the removal of cations and anions from groundwater (e.g., hardness, nitrate, perchlorate) and from industrial process waters (e.g., boiler water, ultrapure water in the semiconductor industry).
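As a rough illustration of the tanks-in-series idea with a time-varying influent, the sketch below uses a single solute and a linear isotherm as a stand-in for the multicomponent ion exchange equilibria actually implemented in MFBMODEL; all numbers and names are illustrative:

```python
import numpy as np

def tanks_in_series(c_in, n_tanks=10, bed_volume_l=1.0, flow_l_per_min=0.1,
                    dt_min=0.1, k_linear=50.0):
    """Tanks-in-series column model with a linear equilibrium isotherm (q = K*c).

    `c_in` is an array of influent concentrations over time, so step changes or
    other variable influents are handled naturally.
    """
    v_tank = bed_volume_l / n_tanks
    # Effective capacity per tank: liquid phase plus linearly partitioned resin phase
    capacity = v_tank * (1.0 + k_linear)
    c = np.zeros(n_tanks)                 # liquid-phase concentration in each tank
    effluent = np.empty(len(c_in))
    for t, cin in enumerate(c_in):
        upstream = cin
        for j in range(n_tanks):
            # Mass balance per tank: accumulation = inflow - outflow
            c[j] += dt_min * flow_l_per_min * (upstream - c[j]) / capacity
            upstream = c[j]
        effluent[t] = c[-1]
    return effluent
```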
Abstract:
The electromagnetic form factors are the most fundamental observables that encode information about the internal structure of the nucleon. The electric ($G_{E}$) and magnetic ($G_{M}$) form factors contain information about the spatial distribution of the charge and magnetization inside the nucleon. A significant discrepancy exists between the Rosenbluth and the polarization transfer measurements of the electromagnetic form factors of the proton. One possible explanation for the discrepancy is the contribution of two-photon exchange (TPE) effects. Theoretical calculations estimating the magnitude of the TPE effect are highly model dependent, and only limited experimental evidence for such effects exists. Experimentally, the TPE effect can be measured by comparing the positron-proton elastic scattering cross section to that of the electron-proton, $R = \frac{\sigma(e^{+}p)}{\sigma(e^{-}p)}$. The ratio $R$ was measured over a wide range of kinematics, utilizing a 5.6 GeV primary electron beam produced by the Continuous Electron Beam Accelerator Facility (CEBAF) at Jefferson Lab. This dissertation explored the dependence of $R$ on kinematic variables such as the squared four-momentum transfer ($Q^{2}$) and the virtual photon polarization parameter ($\varepsilon$). A mixed electron-positron beam was produced from the primary electron beam in experimental Hall B. The mixed beam was scattered from a liquid hydrogen (LH$_{2}$) target. Both the scattered lepton and the recoil proton were detected by the CEBAF Large Acceptance Spectrometer (CLAS). The elastic events were then identified by using elastic scattering kinematics. This work extracted the $Q^{2}$ dependence of $R$ at high $\varepsilon$ ($\varepsilon > 0.8$) and the $\varepsilon$ dependence of $R$ at $\langle Q^{2} \rangle \approx 0.85$ GeV$^{2}$. In these kinematics, our data confirm the validity of the hadronic calculations of the TPE effect by Blunden, Melnitchouk, and Tjon. This hadronic TPE effect, with additional corrections contributed by higher excitations of the intermediate-state nucleon, largely reconciles the Rosenbluth and the polarization transfer measurements of the electromagnetic form factors.
Abstract:
A novel route to prepare highly active and stable N2O decomposition catalysts, based on Fe-exchanged beta zeolite, is presented. The procedure consists of liquid-phase Fe(III) exchange at low pH. By systematically varying the pH from 3.5 to 0, using nitric acid during each Fe(III)-exchange procedure, the degree of dealumination was controlled, as verified by ICP and NMR. Dealumination alters the occurrence of octahedral Al sites neighbouring the Fe sites, improving the performance for this reaction. The catalysts so obtained exhibit a remarkable enhancement in activity, with an optimal pH of 1. Further optimization by increasing the Fe content is possible. The optimal formulation showed good conversion levels, comparable to a benchmark Fe-ferrierite catalyst. The catalyst stability under tail-gas conditions containing NO, O2 and H2O was excellent, without any appreciable activity decay during 70 h time on stream. Based on characterisation and data analysis from ICP, single-pulse excitation NMR, MQ MAS NMR, N2 physisorption, TPR(H2) analysis and apparent activation energies, the improved catalytic performance is attributed to an increased concentration of active sites. Temperature-programmed reduction experiments reveal significant changes in the Fe(III) reducibility pattern, with two reduction peaks tentatively attributed to the interaction of the Fe-oxo species with electron-withdrawing extraframework AlO6 species, causing a delayed reduction. A low-temperature peak is attributed to Fe species exchanged on zeolitic AlO4 sites, which are partially charged by the presence of the neighbouring extraframework AlO6 species. Improved mass transport due to acid leaching is ruled out. The increased activity is rationalized by an active-site model, in which the concentration of active sites increases as the distorted extraframework AlO6 species are selectively washed out under (optimal) acidic conditions, liberating active Fe species.
Abstract:
The key functional operability in the pre-Lisbon PJCCM pillar of the EU is the exchange of intelligence and information amongst the law enforcement bodies of the EU. The twin issues of data protection and data security within what was the EU's third-pillar legal framework therefore come to the fore. With the Lisbon Treaty reform of the EU, the increased role of the Commission in PJCCM policy areas, and the integration of the PJCCM provisions with what have traditionally been the pillar I activities of Frontex, the opportunity arises for streamlining the data protection and data security provisions of the law enforcement bodies of the post-Lisbon EU. This is recognised by the Commission in their drafting of an amending regulation for Frontex, when they say that they would prefer “to return to the question of personal data in the context of the overall strategy for information exchange to be presented later this year and also taking into account the reflection to be carried out on how to further develop cooperation between agencies in the justice and home affairs field as requested by the Stockholm programme.” The focus of the literature published on this topic has, for the most part, been on the data protection provisions in Pillar I, EC. While the focus of research has recently shifted to the previously Pillar III PJCCM provisions on data protection, a more focused analysis of the interlocking issues of data protection and data security needs to be made in the context of the law enforcement bodies, particularly those which were based in the pre-Lisbon third pillar. This paper contributes to that debate, arguing that a review of both the data protection and the data security provisions post-Lisbon is required, not only to reinforce individual rights but also to support inter-agency operability in combating cross-border EU crime. The EC's provisions on data protection, as enshrined in Directive 95/46/EC, do not apply to the legal frameworks covering developments within the third pillar of the EU. Even Council Framework Decision 2008/977/JHA, which is supposed to cover data protection provisions within PJCCM, expressly states that its provisions do not apply to “Europol, Eurojust, the Schengen Information System (SIS)” or to the Customs Information System (CIS). In addition, the post-Treaty of Prüm provisions covering the sharing of DNA profiles, dactyloscopic data and vehicle registration data pursuant to Council Decision 2008/615/JHA are not covered by the provisions of the 2008 Framework Decision. As stated by Hijmans and Scirocco, the regime is “best defined as a patchwork of data protection regimes”, with “no legal framework which is stable and unequivocal, like Directive 95/46/EC in the First pillar”. Data security issues are also key to the sharing of data in organised crime or counter-terrorism situations. This article critically analyses the current legal framework for data protection and security within the third pillar of the EU.
Abstract:
On May 25, 2018, the EU introduced the General Data Protection Regulation (GDPR), which offers EU citizens shelter for their personal information by requiring companies to explain clearly how people's information is used. To comply with the new law, European and non-European companies interacting with EU citizens undertook a massive data re-permission-request campaign. However, while on the one hand the EU regulator was particularly specific in defining the conditions for obtaining access to customers' data, on the other hand it did not specify how the communication between firms and consumers should be designed. This has left firms free to design their re-permission emails as they liked, plausibly coupling the informative nature of these privacy-related communications with other persuasive techniques to maximize data disclosure. Consequently, we took advantage of this colossal wave of simultaneous requests to provide insights into two issues. First, we investigate how companies across industries and countries chose to frame their requests. Second, we investigate which factors influenced the selection of alternative re-permission formats. To achieve these goals, we examine the content of a sample of 1506 re-permission emails sent by 1396 firms worldwide, and we identify the dominant “themes” characterizing these emails. We then relate these themes to both the expected benefits firms may derive from data usage and the possible risks they may face from not being fully compliant with the spirit of the law. Our results show that: (1) most firms enriched their re-permission messages with persuasive arguments aimed at increasing consumers' likelihood of relinquishing their data; (2) the use of persuasion is the outcome of a difficult tradeoff between costs and benefits; and (3) most companies acted in their self-interest and “gamed the system”. Our results have important implications for policymakers, managers, and customers of the online sector.
Abstract:
The discovery of new materials and their functions has always been a fundamental component of technological progress. Nowadays, the quest for new materials is stronger than ever: sustainability, medicine, robotics and electronics are all key areas that depend on the ability to create specifically tailored materials. However, designing materials with desired properties is a difficult task, and the complexity of the discipline makes it difficult to identify general criteria. While scientists have developed a set of best practices (often based on experience and expertise), this is still a trial-and-error process. The situation becomes even more complex when dealing with advanced functional materials: their properties depend on structural and morphological features, which in turn depend on fabrication procedures and environment, and subtle alterations lead to dramatically different results. Because of this, materials modeling and design is one of the most prolific research fields. Many techniques and instruments are continuously being developed to enable new possibilities, in both the experimental and computational realms, and scientists strive to adopt cutting-edge technologies in order to make progress. However, the field is strongly affected by unorganized file management and the proliferation of custom data formats and storage procedures, in both experimental and computational research. Results are difficult to find, interpret and re-use, and a huge amount of time is spent interpreting and re-organizing data. This also strongly limits the application of data-driven and machine learning techniques. This work introduces possible solutions to the problems described above. Specifically, it covers developing features for specific classes of advanced materials and using them to train machine learning models that accelerate computational predictions for molecular compounds; developing methods for organizing non-homogeneous materials data; automating the process of using device simulations to train machine learning models; and dealing with scattered experimental data and using them to discover new patterns.
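As a generic illustration of the surrogate-modeling idea described above (engineered features feeding a machine learning model so that expensive computations can be partly bypassed), here is a minimal scikit-learn sketch; the random-forest choice, feature set, and target are assumptions, not the thesis's pipeline:

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import cross_val_score

def fit_property_model(features: np.ndarray, property_values: np.ndarray):
    """Train a surrogate that maps material/molecular descriptors to a computed
    property and report its cross-validated R^2.

    `features` has shape (n_samples, n_features) and `property_values` shape
    (n_samples,); both stand in for whatever descriptors and target are used.
    """
    model = RandomForestRegressor(n_estimators=300, random_state=0)
    scores = cross_val_score(model, features, property_values, cv=5, scoring="r2")
    model.fit(features, property_values)
    return model, scores.mean()
```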