864 results for Step and flash imprint lithography


Relevance:

100.00%

Publisher:

Abstract:

Imprint varies: Huron, S.D., <1950>-1954.

Relevance:

100.00%

Publisher:

Abstract:

Thesis (Master's)--University of Washington, 2016-06

Relevance:

100.00%

Publisher:

Abstract:

The most potent known naturally occurring Bowman-Birk inhibitor, sunflower trypsin inhibitor-1 (SFTI-1), is a bicyclic 14-amino acid peptide from sunflower seeds comprising one disulfide bond and a cyclic backbone. At present, little is known about the cyclization mechanism of SFTI-1. We show here that an acyclic permutant of SFTI-1 open at its scissile bond, SFTI-1[6,5], also functions as an inhibitor of trypsin and that it can be enzymatically backbone-cyclized by incubation with bovine beta-trypsin. The resulting ratio of cyclic SFTI-1 to SFTI-1[6,5] is approximately 9:1 regardless of whether trypsin is incubated with SFTI-1[6,5] or SFTI-1. Enzymatic resynthesis of the scissile bond to form cyclic SFTI-1 is a novel mechanism of cyclization of SFTI-1[6,5]. Such a reaction could potentially occur on a trypsin affinity column as used in the original isolation procedure of SFTI-1. We therefore extracted SFTI-1 from sunflower seeds without a trypsin purification step and confirmed that the backbone of SFTI-1 is indeed naturally cyclic. Structural studies on SFTI-1[6,5] revealed high heterogeneity, and multiple species of SFTI-1[6,5] were identified. The main species closely resembles the structure of cyclic SFTI-1, with the broken binding loop able to rotate between cis and trans geometries of the I7-P8 bond, the cis conformer being similar to the canonical binding loop conformation. The non-reactive loop adopts a beta-hairpin structure as in cyclic wild-type SFTI-1. Another species exhibits an isoaspartate residue at position 14 and has implications for possible in vivo cyclization mechanisms.

Relevance:

100.00%

Publisher:

Abstract:

Multi-agent algorithms inspired by the division of labour in social insects are applied to a problem of distributed mail retrieval in which agents must visit mail-producing cities and choose between mail types under certain constraints. The efficiency (i.e. the average amount of mail retrieved per time step) and the flexibility (i.e. the capability of the agents to react to changes in the environment) are investigated in both static and dynamic environments. New rules for mail selection and specialisation are introduced and are shown to exhibit improved efficiency and flexibility compared to existing ones. We employ a genetic algorithm which allows the various rules to evolve and compete. Apart from obtaining optimised parameters for the various rules for any environment, we also observe extinction and speciation. From a more theoretical point of view, in order to avoid finite-size effects, most results are obtained for large population sizes. However, we do analyse the influence of population size on the performance. Furthermore, we critically analyse the causes of efficiency loss, derive the exact dynamics of the model in the large-system limit under certain conditions, derive theoretical upper bounds for the efficiency, and compare these with the experimental results.
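For readers unfamiliar with division-of-labour models, the following is a minimal Python sketch of a classic response-threshold rule of the kind such mail-selection models build on; the specific selection and specialisation rules introduced in the thesis are not reproduced here, and all function names and parameters are illustrative assumptions.

```python
import random

# Sketch of a response-threshold rule for mail-type selection, in the spirit
# of division-of-labour models inspired by social insects. Illustrative only.

def selection_probability(stimulus, threshold, n=2):
    """Probability that an agent engages a mail type with the given demand
    stimulus, given the agent's internal threshold for that type."""
    return stimulus**n / (stimulus**n + threshold**n)

def choose_mail_type(stimuli, thresholds):
    """Pick one mail type stochastically according to response thresholds."""
    probs = [selection_probability(s, t) for s, t in zip(stimuli, thresholds)]
    r = random.uniform(0, sum(probs))
    acc = 0.0
    for i, p in enumerate(probs):
        acc += p
        if r <= acc:
            return i
    return len(probs) - 1

def update_thresholds(thresholds, chosen, learn=0.1, forget=0.05):
    """Specialisation: lower the threshold for the chosen type and raise the
    others, so agents that repeatedly handle a type keep preferring it."""
    return [max(0.01, t - learn) if i == chosen else t + forget
            for i, t in enumerate(thresholds)]
```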

Relevance:

100.00%

Publisher:

Abstract:

We report on the development of an ultraviolet-curable hydrogel, based on combinations of poly(ethylene glycol) dimethacrylate (PEGMA), acrylic acid (AA) and N-isopropylacrylamide (NIPAAm), for imprint lithography processes. The hydrogel was successfully imprinted to form dynamic microlens arrays. The response rate of the microlenses, which change volume upon water absorption, was studied optically, showing tunable focusing of the light. A significant change in optical refractive index was measured between the dry and wet states of the microlenses. Our work suggests the use of this newly developed printable hydrogel for various imprinted components for sensing and imaging systems. © 2013 Elsevier B.V. All rights reserved.

Relevance:

100.00%

Publisher:

Abstract:

This paper presents an assessment of the technical and economic performance of thermal processes to generate electricity from a wood chip feedstock by combustion, gasification and fast pyrolysis. The scope of the work begins with the delivery of a wood chip feedstock at a conversion plant and ends with the supply of electricity to the grid, incorporating wood chip preparation, thermal conversion, and electricity generation in dual-fuel diesel engines. Net generating capacities of 1–20 MWe are evaluated. The techno-economic assessment is achieved through the development of a suite of models that are combined to give cost and performance data for the integrated system. The models include feed pretreatment, combustion, atmospheric and pressure gasification, fast pyrolysis with pyrolysis liquid storage and transport (an optional step in de-coupled systems) and diesel engine or turbine power generation. The models calculate system efficiencies, capital costs and production costs. An identical methodology is applied in the development of all the models so that all of the results are directly comparable. The electricity production costs have been calculated for 10th-plant systems, indicating the costs that are achievable in the medium term after the high initial costs associated with novel technologies have reduced. The costs converge at the larger scale with the mean electricity price paid in the EU by a large consumer, and there is therefore potential for fast pyrolysis and diesel engine systems to sell electricity directly to large consumers or for on-site generation. However, competition will be fierce at all capacities since electricity production costs vary only slightly between the four biomass-to-electricity systems that are evaluated. Systems de-coupling is one way that the fast pyrolysis and diesel engine system can distinguish itself from the other conversion technologies. Evaluations in this work show that situations requiring several remote generators are much better served by a large fast pyrolysis plant that supplies fuel to de-coupled diesel engines than by constructing an entire close-coupled system at each generating site. Another advantage of de-coupling is that the fast pyrolysis conversion step and the diesel engine generation step can operate independently, with intermediate storage of the fast pyrolysis liquid fuel, increasing overall reliability. Peak-load or seasonal power requirements would also benefit from de-coupling, since a small fast pyrolysis plant could operate continuously to produce fuel that is stored for use in the engine on demand. Current electricity production costs for a fast pyrolysis and diesel engine system are 0.091/kWh at 1 MWe when learning effects are included. These systems are handicapped by the typical characteristics of a novel technology: high capital cost, high labour requirements, and low reliability. As such, the more established combustion and steam cycle produces lower-cost electricity under current conditions. The fast pyrolysis and diesel engine system is a low capital cost option, but it also suffers from relatively low system efficiency, particularly at high capacities. This low efficiency is the result of a low conversion efficiency of feed energy into the pyrolysis liquid, because of the energy in the char by-product. A sensitivity analysis has highlighted the high impact of the fast pyrolysis liquids yield on electricity production costs.
The liquids yield should be set realistically during design, and it should be maintained in practice by careful attention to plant operation and feed quality. Another problem is the high power consumption during feedstock grinding. Efficiencies may be enhanced in ablative fast pyrolysis, which can tolerate a chipped feedstock, but this has yet to be demonstrated at commercial scale. In summary, the fast pyrolysis and diesel engine system has great potential to generate electricity at a profit in the long term, and at a lower cost than any other biomass-to-electricity system at small scale. This future viability can only be achieved through the construction of early plants that could, in the short term, be more expensive than the combustion alternative. Profitability in the short term can best be achieved by exploiting niches in the market place and specific features of fast pyrolysis. These include:
• countries or regions with fiscal incentives for renewable energy, such as premium electricity prices or capital grants;
• locations with high electricity prices, so that electricity can be sold direct to large consumers or generated on-site by companies who wish to reduce their consumption from the grid;
• waste disposal opportunities where feedstocks can attract a gate fee rather than incur a cost;
• the ability to store fast pyrolysis liquids as a buffer against shutdowns or as a fuel for peak-load generating plant;
• de-coupling opportunities where a large, single pyrolysis plant supplies fuel to several small and remote generators;
• small-scale combined heat and power opportunities;
• sales of the excess char, although a market has yet to be established for this by-product; and
• potential co-production of speciality chemicals and fuel for power generation in fast pyrolysis systems.
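As an illustration of how such techno-economic models relate capital, operating and feedstock costs to an electricity production cost, the following Python sketch computes a simple levelised cost per kWh. It is not the suite of models described above; the discount rate, lifetime and all cost figures are illustrative assumptions.

```python
# Minimal sketch of an electricity production cost calculation of the kind
# used in techno-economic models; all numbers below are illustrative.

def annuity_factor(discount_rate, lifetime_years):
    """Capital recovery factor used to annualise the capital cost."""
    r = discount_rate
    return r / (1 - (1 + r) ** -lifetime_years)

def production_cost_per_kwh(capital_cost, fixed_om, feed_cost_per_kwh_feed,
                            net_efficiency, capacity_kwe, load_factor,
                            discount_rate=0.10, lifetime_years=20):
    """Electricity production cost in currency units per kWh delivered."""
    annual_kwh = capacity_kwe * 8760 * load_factor
    annual_capital = capital_cost * annuity_factor(discount_rate, lifetime_years)
    annual_feed = (annual_kwh / net_efficiency) * feed_cost_per_kwh_feed
    return (annual_capital + fixed_om + annual_feed) / annual_kwh

# Example with illustrative numbers for a 1 MWe plant.
cost = production_cost_per_kwh(capital_cost=3_000_000, fixed_om=150_000,
                               feed_cost_per_kwh_feed=0.01,
                               net_efficiency=0.25, capacity_kwe=1000,
                               load_factor=0.85)
print(f"{cost:.3f} per kWh")
```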

Relevance:

100.00%

Publisher:

Abstract:

Zinc oxide and graphene nanostructures are important technological materials because of their unique properties and potential applications in future generations of electronic and sensing devices. This dissertation presents a brief account of strategies to grow zinc oxide nanostructures (thin film and nanowire) and graphene, and their applications as enhanced field effect transistors, chemical sensors and transparent flexible electrodes. Nanostructured zinc oxide (ZnO) and low-gallium-doped zinc oxide (GZO) thin films were synthesized by a magnetron sputtering process. Zinc oxide nanowires (ZNWs) were grown by a chemical vapor deposition method. Field effect transistors (FETs) of ZnO and GZO thin films and ZNWs were fabricated by standard photo and electron beam lithography processes. Electrical characteristics of these devices were investigated following a nondestructive surface-cleaning treatment: ultraviolet irradiation at high temperature and under vacuum. GZO thin-film transistors showed a mobility of ∼5.7 cm²/V·s at a low operation voltage of <5 V, a low turn-on voltage of ∼0.5 V, and a subthreshold swing of ∼85 mV/decade. Bottom-gated FETs fabricated from ZNWs exhibited a very high on-to-off ratio (∼10⁶) and mobility (∼28 cm²/V·s). A bottom-gated FET showed a large hysteresis of ∼5.0 to 8.0 V, which was significantly reduced to ∼1.0 V by the surface treatment process. The results demonstrate that charge transport in ZnO nanostructures depends strongly on the surface environment and can be explained by the formation of a depletion layer at the surface due to various surface states. A nitric oxide (NO) gas sensor using a single ZNW functionalized with Cr nanoparticles was developed. The sensor exhibited an average sensitivity of ∼46% and a minimum detection limit of ∼1.5 ppm for NO gas. The sensor is also selective towards NO gas, as demonstrated by a cross-sensitivity test with N2, CO and CO2 gases. Graphene film on copper foil was synthesized by a chemical vapor deposition method. A hot-press lamination process was developed for transferring the graphene film to a flexible polymer substrate. The graphene/polymer film exhibited a high-quality, flexible, transparent conductive structure with unique electrical-mechanical properties: ∼88.80% light transmittance and ∼1.1742 kΩ/sq sheet resistance. The application of a graphene/polymer film as a flexible and transparent electrode for field emission displays was demonstrated.

Relevance:

100.00%

Publisher:

Abstract:

Changes in the demographic structure of American families have highlighted the need to reevaluate fatherhood. Research illustrates that paternal involvement positively affects child development, but father absence has increased due to rising rates of divorce, cohabitation, and non-marital childbirth. There is evidence that other male figures can function as effective father surrogates. However, information is limited, particularly with respect to female development. This study examined differences in well-being, achievement, and paternal support among girls in four father categories: (a) Biological Father, (b) Step-Father, (c) Surrogate Father, and (d) No Father. Maternal support, economic hardship, and life stressors were included as potential covariates. Interviews were conducted with an ethnically and economically diverse sample of 694 sixth and eighth grade children. The sample included boys to assess the extent to which the findings were unique to girls. Measures included quantitative and qualitative support from father figures and indices of self-esteem, loneliness, and depression. Standardized test scores and classroom grades were also obtained from school records. Girls with biological fathers had higher achievement test scores than girls in the other father categories, but there were no other differences related to the presence or absence of a father-figure. Biological fathers also provided greater quantitative and qualitative support than step- and surrogate fathers. Surrogate fathers provided a greater amount but lower quality of support than step-fathers. Girls who received lower levels of support from biological fathers reported lower self-esteem and greater loneliness, compared to fatherless girls and those receiving low support from other father figures, suggesting that low support from biological fathers may be especially distressing. On the other hand, girls with low biological father support had higher achievement scores compared to fatherless girls and those who received low support from step- and surrogate fathers. Thus, the mere presence of the biological father appears to facilitate achievement, regardless of the level of support he provides. This study highlights the supportive characteristics of different father figures and their influence on well-being and achievement in females. Future research should focus on the dynamics of surrogate father relationships and the specific characteristics that differentially affect developmental outcomes.

Relevance:

100.00%

Publisher:

Abstract:

This thesis addresses the fabrication of miniaturised NIR spectrometers based on Fabry-Pérot (FP) filter arrays. To date, low-cost patterning of homogeneous and vertically extended cavities for NIR FP filters by nanoimprint technology has not been available, because the layer quality of the imprint material is insufficient and the low mobility of the imprint materials does not suffice to fill the vertically extended cavities. This work concentrates on reducing the technical effort required to fabricate homogeneous and vertically extended cavities. The cavities are patterned using a large-area, substrate-conformal UV nanoimprint process (SCIL - Substrate Conformal Imprint Lithography), which is based on a hybrid stamp and combines the advantages of hard and soft stamps. To overcome the limitations mentioned above, alternative cavity designs are investigated and a new imprint material is employed. Three design solutions for fabricating homogeneous and extended cavities are investigated and compared: (i) applying the imprint material by multiple spin-coating steps to obtain a greater imprint-material layer thickness before the imprint process, (ii) using a hybrid cavity consisting of a patterned layer of imprint material embedded between two silicon oxide layers in order to extend the thickness of the organic cavity, and (iii) optimising the imprint process by using a new imprint material. The FP filter arrays fabricated with these three approaches show high transmission (best transmission > 90%) and small linewidths (FWHM < 5 nm).
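As a rough illustration of how the quoted figures (transmission and linewidth) relate to the cavity parameters, the following Python sketch evaluates the ideal Airy transmission of a lossless Fabry-Pérot cavity. It is not the fabricated filter design; the mirror reflectivity, cavity index, thickness and wavelength are assumed values chosen only to give numbers in a similar range.

```python
import numpy as np

# Ideal Fabry-Perot (Airy) transmission for lossless mirrors of equal
# reflectivity R at normal incidence; illustrative parameters only.

def fp_transmission(wavelength_nm, cavity_nm, n_cavity=1.6, R=0.95):
    delta = 4 * np.pi * n_cavity * cavity_nm / wavelength_nm  # round-trip phase
    F = 4 * R / (1 - R) ** 2                                  # coefficient of finesse
    return 1.0 / (1.0 + F * np.sin(delta / 2) ** 2)

# Linewidth estimate: FWHM ~ free spectral range / finesse.
R = 0.95
finesse = np.pi * np.sqrt(R) / (1 - R)
wavelength = 1000.0               # nm, illustrative NIR wavelength
n_cavity, cavity = 1.6, 937.5     # 3rd-order cavity: 2*n*L = 3*wavelength
fsr = wavelength**2 / (2 * n_cavity * cavity)  # free spectral range in nm
print(f"T at {wavelength:.0f} nm = {fp_transmission(wavelength, cavity, n_cavity, R):.2f}")
print(f"finesse ~ {finesse:.1f}, FWHM ~ {fsr / finesse:.2f} nm")
```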

Relevance:

100.00%

Publisher:

Abstract:

The problem addressed in this thesis is that a considerable proportion of students around the world attend school in inadequate facilities, which is detrimental to the students' learning outcomes. The overall objective of this thesis is to develop a methodology, with a novel approach to involving teachers, to generate a valuable basis for decisions regarding the design and improvement of the physical school environment, based on the expressed needs of a specific school, municipality, or district as well as evidence from existing research. Three studies have been conducted to fulfil the objective: (1) a systematic literature review and the development of a theoretical model for analysing the role of the physical environment in schools; (2) semi-structured interviews with teachers to elicit their conceptions of the physical school environment; (3) a stated preference study with an experimental design, administered as an online survey. Wordings from the transcripts of the interview study were used when designing the survey form. The aim of the stated preference study was to examine the usability of the method when applied in this new context of the physical school environment. The result is a methodology with a mixed-method chain in which the first step involves a broad investigation of the specific circumstances and conceptions for the specific school, municipality, or district. The second step is to use the developed theoretical model and the results from the literature study to analyse the results from the first step and transform them into a format that fits the design of a stated preference study. The final step is a refined version of the procedure of the performed stated preference study.

Relevance:

100.00%

Publisher:

Abstract:

Nanotechnology has revolutionised humanity's capability to build microscopic systems by manipulating materials on a molecular and atomic scale. Nanosystems are becoming increasingly smaller and more complex from the chemical perspective, which increases the demand for microscopic characterisation techniques. Among others, transmission electron microscopy (TEM) is an indispensable tool that is increasingly used to study the structures of nanosystems down to the molecular and atomic scale. However, despite the effectiveness of this tool, it can only provide 2-dimensional projection (shadow) images of the 3D structure, leaving the 3-dimensional information hidden, which can lead to incomplete or erroneous characterisation. One very promising inspection method is Electron Tomography (ET), which is rapidly becoming an important tool to explore the 3D nano-world. ET provides (sub-)nanometer resolution in all three dimensions of the sample under investigation. However, the fidelity of the ET tomogram that is achieved by current ET reconstruction procedures remains a major challenge. This thesis addresses the assessment and advancement of electron tomographic methods to enable high-fidelity three-dimensional investigations. A quality assessment investigation was conducted to provide a quantitative analysis of the quality of the main established ET reconstruction algorithms and to study the influence of the experimental conditions on the quality of the reconstructed ET tomogram. Regularly shaped nanoparticles were used as a ground truth for this study. It is concluded that the fidelity of the post-reconstruction quantitative analysis and segmentation is limited, mainly by the fidelity of the reconstructed ET tomogram. This motivates the development of an improved tomographic reconstruction process. In this thesis, a novel ET method is proposed, named dictionary learning electron tomography (DLET). DLET is based on the recent mathematical theory of compressed sensing (CS), which employs the sparsity of ET tomograms to enable accurate reconstruction from undersampled (S)TEM tilt series. DLET learns the sparsifying transform (dictionary) in an adaptive way and reconstructs the tomogram simultaneously from highly undersampled tilt series. In this method, sparsity is applied to overlapping image patches, favouring local structures. Furthermore, the dictionary is adapted to the specific tomogram instance, thereby favouring better sparsity and consequently higher-quality reconstructions. The reconstruction algorithm is based on an alternating procedure that learns the sparsifying dictionary and employs it to remove artifacts and noise in one step, and then restores the tomogram data in the other step. Simulated and real ET experiments on several morphologies are performed with a variety of setups. Reconstruction results validate its efficiency in both noiseless and noisy cases and show that it yields an improved reconstruction quality with fast convergence. The proposed method enables the recovery of high-fidelity information without the need to worry about which sparsifying transform to select or whether the images used strictly follow the pre-conditions of a certain transform (e.g. strictly piecewise constant for Total Variation minimisation). This can also avoid artifacts that can be introduced by specific sparsifying transforms (e.g. the staircase artifacts that may result when using Total Variation minimisation).
Moreover, this thesis shows how reliable, elementally sensitive tomography using EELS is possible with the aid of both the appropriate use of dual electron energy loss spectroscopy (DualEELS) and the DLET compressed sensing algorithm, making the best use of the limited data volume and signal-to-noise ratio inherent in core-loss electron energy loss spectroscopy (EELS) from nanoparticles of an industrially important material. Taken together, the results presented in this thesis demonstrate how high-fidelity ET reconstructions can be achieved using a compressed sensing approach.
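The following Python sketch illustrates, in 2D, the kind of alternating loop described above: one step learns a patch dictionary and sparse-codes the current estimate, the other pushes the estimate towards the measured (undersampled) tilt series. It assumes a recent scikit-image/scikit-learn API (radon/iradon, MiniBatchDictionaryLearning) and is not the DLET implementation from the thesis; the patch size, atom count, sparsity level and step size are illustrative and untuned.

```python
import numpy as np
from skimage.transform import radon, iradon
from sklearn.decomposition import MiniBatchDictionaryLearning
from sklearn.feature_extraction.image import extract_patches_2d, reconstruct_from_patches_2d

def dlet_like_reconstruct(sinogram, theta, size, n_outer=5, step=0.01,
                          patch=8, n_atoms=64, sparsity=3):
    # Unfiltered back-projection as a rough starting estimate.
    x = iradon(sinogram, theta=theta, filter_name=None, circle=False, output_size=size)
    for _ in range(n_outer):
        # Step 1: learn a dictionary on the current estimate and sparse-code
        # all overlapping patches (suppresses noise and streak artifacts).
        patches = extract_patches_2d(x, (patch, patch))
        flat = patches.reshape(len(patches), -1)
        means = flat.mean(axis=1, keepdims=True)
        dico = MiniBatchDictionaryLearning(n_components=n_atoms, alpha=1.0,
                                           transform_algorithm='omp',
                                           transform_n_nonzero_coefs=sparsity,
                                           random_state=0).fit(flat - means)
        codes = dico.transform(flat - means)
        denoised = (codes @ dico.components_) + means
        x = reconstruct_from_patches_2d(denoised.reshape(patches.shape), x.shape)
        # Step 2: data consistency -- gradient step towards the measured
        # tilt series (step size is illustrative, not tuned).
        residual = radon(x, theta=theta, circle=False) - sinogram
        x -= step * iradon(residual, theta=theta, filter_name=None,
                           circle=False, output_size=size)
    return x

# Usage with an undersampled tilt series (e.g. 15 projections over +/-70 deg):
# theta = np.linspace(-70, 70, 15)
# sinogram = radon(phantom, theta=theta, circle=False)
# rec = dlet_like_reconstruct(sinogram, theta, size=phantom.shape[0])
```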

Relevance:

100.00%

Publisher:

Abstract:

This thesis investigates how web search evaluation can be improved using historical interaction data. Modern search engines combine offline and online evaluation approaches in a sequence of steps that a tested change needs to pass through to be accepted as an improvement and subsequently deployed. We refer to such a sequence of steps as an evaluation pipeline. In this thesis, we consider the evaluation pipeline to contain three sequential steps: an offline evaluation step, an online evaluation scheduling step, and an online evaluation step. In this thesis we show that historical user interaction data can aid in improving the accuracy or efficiency of each of the steps of the web search evaluation pipeline. As a result of these improvements, the overall efficiency of the entire evaluation pipeline is increased. Firstly, we investigate how user interaction data can be used to build accurate offline evaluation methods for query auto-completion mechanisms. We propose a family of offline evaluation metrics for query auto-completion that represents the effort the user has to spend in order to submit their query. The parameters of our proposed metrics are trained against a set of user interactions recorded in the search engine’s query logs. From our experimental study, we observe that our proposed metrics are significantly more correlated with an online user satisfaction indicator than the metrics proposed in the existing literature. Hence, fewer changes that pass the offline evaluation step will later be rejected at the online evaluation step. As a result, this allows us to achieve a higher efficiency of the entire evaluation pipeline. Secondly, we formulate the problem of the optimised scheduling of online experiments. We tackle this problem by considering a greedy scheduler that prioritises the evaluation queue according to the predicted likelihood of success of a particular experiment. This predictor is trained on a set of online experiments, and uses a diverse set of features to represent an online experiment. Our study demonstrates that a higher number of successful experiments per unit of time can be achieved by deploying such a scheduler on the second step of the evaluation pipeline. Consequently, we argue that the efficiency of the evaluation pipeline can be increased. Next, to improve the efficiency of the online evaluation step, we propose the Generalised Team Draft interleaving framework. Generalised Team Draft considers both the interleaving policy (how often a particular combination of results is shown) and click scoring (how important each click is) as parameters in a data-driven optimisation of the interleaving sensitivity. Further, Generalised Team Draft is applicable beyond domains with a list-based representation of results, e.g. in domains with a grid-based representation, such as image search. Our study using datasets of interleaving experiments performed both in document and image search domains demonstrates that Generalised Team Draft achieves the highest sensitivity. A higher sensitivity indicates that the interleaving experiments can be deployed for a shorter period of time or use a smaller sample of users. Importantly, Generalised Team Draft optimises the interleaving parameters w.r.t. historical interaction data recorded in the interleaving experiments. Finally, we propose to apply sequential testing methods to reduce the mean deployment time for the interleaving experiments. We adapt two sequential tests for the interleaving experimentation.
We demonstrate that one can achieve a significant decrease in experiment duration by using such sequential testing methods. The highest efficiency is achieved by the sequential tests that adjust their stopping thresholds using historical interaction data recorded in diagnostic experiments. Our further experimental study demonstrates that cumulative gains in the online experimentation efficiency can be achieved by combining the interleaving sensitivity optimisation approaches, including Generalised Team Draft, and the sequential testing approaches. Overall, the central contributions of this thesis are the proposed approaches to improve the accuracy or efficiency of the steps of the evaluation pipeline: the offline evaluation frameworks for the query auto-completion, an approach for the optimised scheduling of online experiments, a general framework for the efficient online interleaving evaluation, and a sequential testing approach for the online search evaluation. The experiments in this thesis are based on massive real-life datasets obtained from Yandex, a leading commercial search engine. These experiments demonstrate the potential of the proposed approaches to improve the efficiency of the evaluation pipeline.
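For context, the sketch below implements classic Team Draft interleaving, the method that Generalised Team Draft extends, together with a simple per-impression click-credit rule. The generalised policy and click-scoring optimisation described above are not reproduced, and all names and the credit rule are illustrative.

```python
import random

def team_draft_interleave(ranking_a, ranking_b, length=10):
    """Merge two rankings round by round; each round a coin flip decides which
    team picks first, and each team picks its highest-ranked unused document.
    Returns the interleaved list and the team assignment of each document."""
    interleaved, teams = [], {}
    ia = ib = 0
    while len(interleaved) < length and (ia < len(ranking_a) or ib < len(ranking_b)):
        order = ['A', 'B'] if random.random() < 0.5 else ['B', 'A']
        for team in order:
            ranking, idx = (ranking_a, ia) if team == 'A' else (ranking_b, ib)
            while idx < len(ranking) and ranking[idx] in teams:
                idx += 1  # skip documents already placed
            if idx < len(ranking):
                doc = ranking[idx]
                interleaved.append(doc)
                teams[doc] = team
            if team == 'A':
                ia = idx + 1
            else:
                ib = idx + 1
            if len(interleaved) >= length:
                break
    return interleaved, teams

def credit(clicked_docs, teams):
    """Simple per-impression outcome: the ranker whose documents attracted
    more clicks wins the impression."""
    a = sum(1 for d in clicked_docs if teams.get(d) == 'A')
    b = sum(1 for d in clicked_docs if teams.get(d) == 'B')
    return 'A' if a > b else 'B' if b > a else 'tie'
```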

Relevance:

100.00%

Publisher:

Abstract:

Freeze-drying technology can give good quality attributes of vegetables and fruits in terms of color, nutrition, volume, rehydration kinetics, and stability during storage, among others, when compared with solely air-dried products. However, published scientific work has shown that treatments applied before and after air dehydration are effective in improving food attributes and quality. Therefore, the hypothesis of the present thesis was based on an extensive review of scientific work showing the possibility of applying a pre-treatment and a post-treatment to food products, combined with conventional air drying, aiming to come close to, or even exceed, the quality that a freeze-dried product can give. Such attributes are enzymatic inactivation, stability during storage, drying and rehydration kinetics, color, nutrition, volume and texture/structure. With regard to pre-treatments, the ones studied in the present work were water blanching, steam blanching, ultrasound, freezing, high pressure and osmotic dehydration. Pulsed electric field treatment was also studied, but its effects on food attributes were not examined in detail. Basically, water and steam blanching proved adequate to inactivate enzymes in order to prevent enzymatic browning and preserve product quality during long storage periods. With regard to the ultrasound pre-treatment, the published results indicate that ultrasound is effective in reducing subsequent drying times and in improving rehydration kinetics and color retention. On the other hand, studies showed that ultrasound allows sugar losses and, in some cases, can lead to cell disruption. For the freezing pre-treatment, an overall conclusion was difficult to draw for some food attributes, since each fruit or vegetable is unique and freezing involves many variables. However, for the studied cases, freezing was shown to be a pre-treatment able to enhance rehydration kinetics and color attributes. The high-pressure pre-treatment was shown to inactivate enzymes, improving the storage stability of food, and had a positive effect on rehydration. For other attributes, when high-pressure technology was applied, the literature showed divergent results depending on the crops used. Finally, osmotic dehydration has been widely used in food processing to incorporate a desired salt or sugar present in aqueous solution into the cellular structure of the food matrix (improvement of the nutrition attribute). Moreover, osmotic dehydration leads to shorter drying times, and the impregnation of solutes during osmosis strengthens the cellular structure of the food. In the case of post-treatments, puffing and a new technology known as instant controlled pressure drop (DIC) were reported in the literature as treatments able to improve diverse food attributes. Basically, both technologies are similar: the product is submitted to a high-pressure step, and the process can make use of different heating media such as CO2, steam, air and N2. However, there is a significant difference in the final stage of the two processes, which can compromise the quality of the final product. On the other hand, puffing and DIC are used to expand cellular tissues, improving the volume of food samples and helping, among other things, in subsequent rehydration kinetics.
The effectiveness of such pre- and/or post-treatments depends on the state of the vegetables and fruits used, which in turn depends on their cellular structure, variety, origin, state (fresh, ripe, raw), harvesting conditions, etc. In conclusion, as seen in the open literature, the application of pre-treatments and post-treatments coupled with conventional air dehydration aims to give dehydrated food products with quality similar to that of freeze-dried ones. Throughout the present Master's thesis, the experimental data were removed for confidentiality reasons of the company Unilever R&D Vlaardingen.

Relevance:

100.00%

Publisher:

Abstract:

Background: Prenatal hydronephrosis (PNH) is dilation of the urinary collecting system and is the most frequent neonatal urinary tract abnormality, with an incidence of 1% to 5% of all pregnancies. PNH is defined as an anteroposterior diameter (APD) of the renal pelvis ≥ 4 mm at a gestational age (GA) of < 33 weeks and APD ≥ 7 mm at a GA of ≥ 33 weeks to 2 months after birth. All patients need to be evaluated after birth by postnatal renal ultrasonography (US). In the vast majority of cases, watchful waiting is all that is required; others need medical or surgical therapy. Objectives: There is a direct relationship between the APD of the renal pelvis and the outcome of PNH. We therefore aimed to find the APD cutoff point of the renal pelvis that best predicts a surgical outcome. Patients and Methods: In this retrospective cohort study we followed 200 patients, 1 to 60 days old, with a diagnosis of PNH based on ultrasonography performed before or after birth (prenatally or postnatally detected, respectively). These patients were referred to the nephrology clinic in Zahedan, Iran, during 2011 to 2013. The first step of the investigation was a postnatal renal US, performed by the same expert radiologist, classifying the patients into 3 groups: normal, mild/moderate and severe. The second step was to perform a voiding cystourethrogram (VCUG) for mild/moderate to severe cases at 4 - 6 weeks of life. Tc-diethylenetriaminepentaacetic acid (DTPA) scanning was the last step, performed for those with normal VCUG who did not show improvement on follow-up US, to evaluate obstruction and renal function. Finally, all patients with mild/moderate to severe PNH received conservative therapy, and surgery was reserved only for progressive cases, obstruction, or renal function ≤ 35%. All patients' data and radiologic information were recorded in separate data forms and then analyzed with SPSS (version 22). Results: The 200 screened PNH patients (male-to-female ratio 3.5:1) underwent a first postnatal control US, of whom 65% had normal findings, 18% mild/moderate and 17% severe hydronephrosis. 167 patients underwent VCUG, of whom 20.82% had VUR. 112 patients underwent DTPA scanning, with the following results: 50 patients had obstruction and 62 patients showed no obstructive findings. Finally, 54% of the 200 patients recovered with conservative therapy, 12.5% required surgery, and the remainder improved without any surgical intervention. Conclusions: The best cutoff point of the anteroposterior renal pelvis diameter that led to surgery was 15 mm, with a sensitivity of 88% and a specificity of 74%.
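As an illustration of how a diagnostic cutoff with an associated sensitivity and specificity can be chosen, the following Python sketch selects an APD cutoff by maximising the Youden index over candidate values. The study's actual statistical procedure is not described in the abstract, and the data below are synthetic.

```python
import numpy as np

def best_cutoff(apd_mm, needed_surgery):
    """Return (cutoff, sensitivity, specificity) maximising the Youden index
    (sensitivity + specificity - 1) for the rule 'operate if APD >= cutoff'."""
    apd_mm = np.asarray(apd_mm, dtype=float)
    needed_surgery = np.asarray(needed_surgery, dtype=bool)
    best = (None, 0.0, 0.0, -1.0)
    for cut in np.unique(apd_mm):
        predicted = apd_mm >= cut
        tp = np.sum(predicted & needed_surgery)
        tn = np.sum(~predicted & ~needed_surgery)
        sens = tp / max(needed_surgery.sum(), 1)
        spec = tn / max((~needed_surgery).sum(), 1)
        if sens + spec - 1 > best[3]:
            best = (cut, sens, spec, sens + spec - 1)
    return best[:3]

# Synthetic example: larger APD values more often end in surgery.
rng = np.random.default_rng(0)
apd = rng.normal(10, 4, 200).clip(4, 40)
surgery = rng.random(200) < 1 / (1 + np.exp(-(apd - 15)))
print(best_cutoff(apd, surgery))
```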