Science
The process of taking images using UAVs to identify cracks in a pavement

Question Description

My professor is asking me to concentrate on the process of taking the images using a UAV to identify the cracks in a pavement (i.e., the types of cameras used in the UAVs, the height at which the UAV needs to travel, the number of pictures to be taken, the speed of the UAV, pixel density, etc.), and also asked me to use terms such as image synchronization, image overlapping, etc.


The file attached is a journal which can be taken as a reference.

Unformatted Attachment Preview

International Journal of Engineering Research & Technology (IJERT), ISSN: 2278-0181, Vol. 4 Issue 02, February 2015

Photogrammetry Image Processing for Mapping by UAV

Nijandan S, Gokulakrishnan G, Nagendra Prasad R, Mr. Mahendran S, M.Tech., Avionics Engineering, School of Aeronautical Sciences, Hindustan University, Chennai, India; Mr. P. S. B. Kirubakaran, M.Tech., Assistant Professor, School of Aeronautical Sciences, Hindustan University, Chennai, India.

Abstract — The basic goal of this project is low-cost image processing of a surveillance area using a UAV. The image processing generates a map of the particular area; for terrain surveillance, the mapping is implemented with a photogrammetric tool. The UAV is efficient for flying in all-weather conditions thanks to a protective reinforced composite airframe, and is designed for long endurance and range. A high-efficiency perspective depth camera is used to map the terrain surface; the depth camera depends on a wide lens together with an integration program. The autopilot provides automatic control of the aerial vehicle, and a microprocessor guides the mission plan for mapping. The mapping process generates output in the form of metric images. During mapping, lighting conditions are handled automatically through the integration of the camera with GPS, which is also used for locating a surface or land area. The output is examined by the tool and finally evaluated as metric images.

Keywords — Photogrammetry image, Autopilot system, Image Processing, Mapping.

I. INTRODUCTION

In photogrammetric applications, image processing is highly efficient for the mapping process. UAVs are starting to represent a larger share of the aerospace sector because they can execute a wide range of missions.
The development of optoelectronics, nanotechnology and composite materials makes UAV projects innovative. In mapping, the images start as 2D and are later converted into 3D using the photogrammetry tool, which gives an accurate fit for the image in metric-scale analysis. For the mapping process, photographic methods are followed to capture images within a certain range with shutter-timing corrections; multiple images are captured, and the tool later mosaics them for output. The necessary steps for designing and executing an aerial image acquisition mission are far from well defined. If we look at the involved products one by one, we see impressive specifications: high-resolution DSLR or compact cameras, navigation systems with fast CPUs, reliable GPS units and radio receivers. The autopilot is integrated with the camera, microcontroller, microprocessor and powerplant in the fabricated UAV. The process starts with the autopilot triggering the GPS-integrated camera, and the shutter timing is adjusted at each trigger. To withstand vibration on the integrated board, the autopilot is fine-tuned and supported with a magnetometer. Telemetry is used to receive and transmit data between the board and the GCS. As the process continues, the image data are collected and evaluated with the photogrammetry tool. Finally, this proved a very cheap and easy way to acquire aerial images: good satellite imagery was too expensive, with coarse temporal resolution and a high percentage of cloudiness, and conventional aerial missions were also expensive, whereas UAVs equipped with autopilots that allow them to fly a predefined path looked very promising.
Now, after many years of development and improvement, we have arrived at the approach of acquiring aerial images using small, robust and lightweight unmanned airplanes (UAVs).

II. FLIGHT CONTROL SYSTEM FOR UAV

The UAV autopilot system is a closed-loop control system comprising two parts, an observer and a controller. The most common observer is a micro inertial guidance system including gyros, accelerometers and a magnetometer. The readings, combined with GPS information, are passed to a filter that generates estimates of the current states for control, and the control system guides the UAV based on these observations. In the FCS, the input controls are calibrated from maximum to minimum with a PID controller, which generates values through the calibration process as well as control prediction. The telemetry system collects data by receiving from and transmitting to the FCS, and the FCS is guided by the GCS or by the calibrated PID values. (Figure 1. Mission control system.) In open source, the storage controls are also corrected and calibrated for the mission process. (Figure 2. Calibrating the PID controls.)

III. AUTOPILOT SYSTEM

The result of flight planning was a carefully defined flight path. As input for the UAV navigation software (GCS), a text file was generated containing the status, the image acquisition point numbers and the coordinate system, as well as parameters for flying velocity. The calibration remains constant until the flight control system ends the process. From the FCS, the mission started by flying against the wind, with the mission set up in RTL mode. (Figure 3. Flight control system for UAV.)

The APM board is the autopilot for the UAV mission; the multitasking information is processed according to the priority of the data. The board has connector pins for telemetry and the transmitter channel as input, and servo control as output; the servos are governed by the autopilot system. An autopilot is a MEMS-based system used to guide the UAV without assistance from human operators, consisting of both hardware and its supporting software. Autopilot systems are now widely used in modern aircraft and ships. The objective of a UAV autopilot system is to consistently guide the UAV along reference paths or through a set of waypoints; a powerful autopilot system can guide the UAV through all stages, including take-off, ascent, descent, trajectory following and landing. The autopilot needs to communicate with the ground station for control-mode switching, receive broadcasts from GPS satellites for position updates, and send control inputs to the servo motors on the UAV. Other attitude determination devices are also available, such as infrared- or vision-based ones. The sensor readings combined with GPS information are passed to a filter to generate estimates of the current states for later control use. Based on different control strategies, UAV autopilots can be categorized as PID-based autopilots, NN-based autopilots and other robust autopilots. The calibrated controls are manipulated by the system and integrated accordingly. The camera is also integrated with the autopilot via a gimbal mount that holds it stable. The autopilot guides the waypoint navigation for image processing: the mission planner creates the waypoints in the orthophoto method or the oblique method; in the orthophoto method, parallel waypoints are plotted to guide the mission.
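The paper categorizes autopilots as PID-based, NN-based and other robust designs. As a generic illustration only (this is not the paper's firmware; the gains, timestep and single-axis plant below are invented for the sketch), a PID attitude-hold loop can be written as:

```python
class PID:
    """Generic PID controller: u = Kp*e + Ki*integral(e) + Kd*de/dt."""
    def __init__(self, kp, ki, kd):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.integral = 0.0
        self.prev_error = None

    def update(self, setpoint, measurement, dt):
        error = setpoint - measurement
        self.integral += error * dt
        # No derivative kick on the very first sample.
        derivative = 0.0 if self.prev_error is None else (error - self.prev_error) / dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

# Toy plant: hold a 0-degree roll angle starting from a 10-degree disturbance.
pid = PID(kp=0.8, ki=0.1, kd=0.2)
roll = 10.0
for _ in range(200):                      # 200 steps of 20 ms = 4 s of flight
    roll += pid.update(setpoint=0.0, measurement=roll, dt=0.02) * 0.02
# roll has decayed close to the 0-degree setpoint
```

In a real flight control system this update would run once per sensor cycle, with the filtered gyro/accelerometer/GPS state estimate as `measurement` and the servo command as the output.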
An open-source autopilot helps in correcting the board by replacing some corrective controls for detection. (Figure 4. ArduPilot (autopilot).)

IV. CAMERA INTEGRATION WITH GPS

Camera integration here means that the camera is integrated with the GPS for triggering and communication purposes. The integrated camera is connected to the autopilot system and to the gimbal panel, which keeps the camera stable; this stability gives efficient images with good depth clarity. The shutter timing is adjusted for taking frequent images, and frequent images [1] are more accurate for the mosaic process in the photogrammetry tool. A wider lens gives images with more depth and clarity according to the latitude and longitude correction; in this camera the wide angle is more than 24 mm, and CMOS and TFT components are used for depth purposes. The script below steps the camera through its wide, frequent and snap zoom states, and this adjustment of the camera settings keeps the images coming continuously. Overall, the camera is integrated and pinned to the autopilot. (Figure 5. Canon PowerShot S100.)

The camera control script (reassembled from the preview):

    @param o Zoom-Wide
    @default o 100
    @param i Zoom-frequent
    @default i 30
    @param s Zoom-snap
    @default s 10

    while 1
      do
        k = get_usb_power
      until k>0
      if k < 5 then gosub "ch1up"
      if k > 4 and k < 8 then gosub "ch1mid"
      if k > 7 and k < 11 then gosub "ch1down"
      if k > 10 and k < 14 then gosub "ch2up"
      if k > 13 and k < 17 then gosub "ch2mid"
      if k > 16 and k < 20 then gosub "ch2down"
      if k > 19 then print "error"
    wend
    end

    :ch1up
    print "Ch1Up-snap"; k
    set_zoom s
    shoot
    sleep 1000
    return

    :ch1mid
    print "Ch1Mid-frequent"; k
    set_zoom i
    sleep 1000
    return

    :ch1down
    print "Ch1Down-Wide"; k
    set_zoom o
    sleep 1000
    return

    :ch2up
    return

    :ch2mid
    return

    :ch2down
    return

V. IMAGE PROCESSING

Image processing for 3D images has up to now only been used in addition to images taken from the ground. Here, the marked land surface is fully image-processed by the fabricated UAV: the aerial image is the input and is processed along with its parameters, then manual measurement and automatic generation of images are executed, and the generated images are bundled with the tool and used for 3D processing.

The CHDK script is integrated with the camera for GPS synchronization: it provides the settings to integrate with latitude and longitude, and is tuned along with the position vector and the corrected angle. In the mission planner, with GPS integration, the waypoints are listed with latitude/longitude correction. The waypoints are plotted with the home point in polygon fashion, and the continuous waypoints are given in RTL mode, so the mission starts from the initial point and ends at the same point. The script tracks each image, and a directory of images is maintained; the camera is then corrected for image processing. Each image follows a set of steps: tracking of image - directory of image - satellite timing - camera time. The tracking of the image follows the mission plan and is corrected with latitude/longitude correction.

The procedures for the subsequent image processing are [3]:
- Multiple image matching
- Image matching primitives
- Image matching parameters
- Redundancy image matching
- Image surface modeling
- Image mosaicking

The image is processed in the orthophoto pattern, in which a number of points are marked in the pattern orientation.
The pattern has parallel waypoints, along which the images are captured with multiple variations within fractions of the shutter timing. The shutter timing depends on the frame rate, and the frames are calculated for each image; the frames depend on the ground resolution of 4 cm for each respective image. The images are taken frequently by the camera under the mission plan, snapped and saved to the memory devices and to the GCS. The GCS guides the system until it reaches the destination, and waypoint navigation guides the mission. Using mapping software, the following versions of UAV orthophotos are produced for data analysis:

- An orthophoto covering the marked site with a ground resolution of 4 cm and a 24 cm grid size.
- Multiple orthophotos of the marked site with a ground resolution of 2 cm and a 20 cm grid size.
- One orthophoto of the best-preserved part of the marked site with a ground resolution of 4 cm and a 10 cm grid size.

Continuing the image-tracking steps:
- The directory of images records the kind of surface and how it is to be handled in processing.
- Satellite timing gives the timing correction for image processing.
- Camera timing means the shutter timing and camera inclination.

(Figure 6. Waypoints for land surface. Figure 7. Orthophoto pattern.)

In this pattern the waypoints form multiple parallel lines to grab the images. The multiple lines give multiple images with an inclination of 45 deg for each image at the frame rate. These images are captured with the stated ground resolution and the surface-mapping factor, and are fed to the mapper tool to be estimated in 3D. The 2D images are converted to 3D using the mapper tool, which can convert the images at high resolution.

VI. TOOL SIMULATION

Here the tool is run on the captured images, which are fed to it for conversion. The conversion follows several steps: captured image - scanning of multiple images - rectification of images - stitching and mosaicking of images - conversion of images.

- Captured images: these images are on the 2D plane with correction of the latitude and longitude errors [2]. The images are captured in multiple mode, which itself selects the clearest image for rectification.
- Scanning of multiple images: the images are scanned to select the clear image from among the multiple images; this scanning eliminates blurred images and is thoroughly verified.
- Rectification of images: the images are rectified against the clear image. The inclined images are matched with one another and correspond to the original image; the entire marked area is evaluated and rectified under this process.
- Stitching and mosaicking: the aerial images are scanned and verified with pattern rectification, then stitched and mosaicked by the merging process, so the entire marked area is mosaicked and the stitched image is evaluated.
- Conversion of images: the images are converted by the stitching and mosaicking process into a 3D mapping image.

(Figure 8. Stitched and mosaicked image. Figure 9. Stitched image converted to 3D.)

From the tool simulation, the exact output is verified and rectified. This is the process of converting the images using the tool.

VII. CONCLUSION

The autonomous UAV system was used on this terrain land surface with high expectations. It is convenient for taking images, and it generated the images according to the waypoints.
In this particular system the autopilot is corrected by adding the magnetometer for direction indication, so it absorbs vibration and performs well throughout the mission. At low cost, this mission achieved high endurance and range (endurance being how long the UAV flies, range how much distance is covered), so both goals were met, and the system worked fast, efficiently and accurately for the mission; the image processing was likewise achieved as expected. 3D images are the preferred output for this mapping project, and that kind of output was delivered. The image processing was done in a restricted area due to regulations on flying UAVs; the restricted area was put under mission surveillance for mapping a marked land surface, and certain patterns had to be followed to complete the expected mission. Further, this surveillance mission can be extended to larger land surfaces to map a particular arena. Depth mapping is an important concern, because a terrain surface that appears even on all sides may still have some uneven patches; for such cases, depth image processing is to be investigated, and the project mission extended accordingly. The hardware components can be improved by changing the zoom lens and increasing the shutter timing [5]; other improvements are changes to the orthophoto image pattern and a wider inclination of the zoom camera, while the GPS correction and the observer are also looked after for mission progress. Normally, the software tool takes its own time for the stitching and mosaicking process, so the tool should be updated to automatically rectify the scanning process and improve stitching and mosaicking, reducing the time consumed by stitching. These are the improvements to be pursued under this project. A highly complex terrain land surface, difficult to access, was recorded in just a day of field work by two new systems that exceed conventional surveying methods in accuracy, density and acquisition time. Image processing allowed the elaboration and visualization of the 3D mapping. The UAV system, the flight planning and the image processing method presented here are therefore powerful tools for recording and mapping other land surfaces.

ACKNOWLEDGMENT

It is my extreme pleasure that we owe a debt of gratitude to my parents, friends and well-wishers for their valuable support, suggestions and encouragement.

H. Poor, "An Introduction to Signal Detection and Estimation". New York: Springer-Verlag, 1985, ch. 4.

REFERENCES

[1] Anuar Ahmad, 2005, "Digital Photogrammetry: An Experience of Processing Aerial Photographs of UTM Acquired Using Digital Camera", 15(85): 107-122.
[2] Travelletti, J., Oppikofer, T., Delacourt, C., Malet, J.-P., Jaboyedoff, M., 2008, "Monitoring landslide displacements during a controlled rain experiment using a long-range terrestrial laser scanner (TLS)", Remote Sensing and Spatial Information Sciences, Beijing, China, Vol. 37, Part B5, pp. 485-490.
[3] A. Ntregka, A. Georgopoulos, M. Santana Quintero, 2013, "Photogrammetric Exploitation of HDR Images for Cultural Heritage Documentation", International CIPA Symposium, Remote Sensing and Spatial Information Sciences, Volume II-5.
[4] Grenzdorffer, G., 2004, "The Integrated Digital Remote Sensing System", XX ISPRS Congress, 12.7-23.7.2004, Istanbul, Vol. XXXV, Part B, Commission 1, pp. 235-239.

I'm Nagendra Prasad R, pursuing M.Tech Avionics Engineering at Hindustan University. I believe that in the future UAVs will be among the technologies that rule the world, so I decided to work in this area. NAGENDRA PRASAD R

I'm Gokulakrishnan G, pursuing M.Tech at Hindustan University with a specialization in Avionics Engineering. This paper shows how enthusiastic I am about drones; I have also designed and fabricated many fixed-wing aircraft as a hobbyist. GOKULAKRISHNAN G

I'm Nijandan S, studying for a Master's degree in Avionics Engineering at Hindustan University. I did my undergraduate degree in Electrical, so I have a special interest in the avionic systems of drones. NIJANDAN S

Final Answer

attached is my answer

The Process of Taking Images Using a UAV to Identify Cracks in a Pavement
Student’s Name
Course’s Name
Professor’s Name
Institution
Due Date

The Process of Taking Images Using a UAV to Identify Cracks in a Pavement
Abstract
Pavement condition evaluation is a fundamental part of modern pavement management systems, since rehabilitation strategies are planned based on its results. For proper assessment, existing pavements must be continuously and effectively monitored using practical means. Traditionally, truck-based pavement monitoring systems have been used to assess the remaining life of in-service pavements. Although such systems produce accurate results, they can be expensive and their data processing time-consuming, which makes them impractical given the demand for rapid pavement assessment. To overcome these issues, UAVs can be used as an alternative, as they are comparatively cheaper and easier to deploy. In this study, we propose a UAV-based pavement crack identification system for monitoring the current condition of rigid pavements. The system consists of recently introduced image processing algorithms used together with conventional machine learning methods, both of which are used to detect cracks on rigid pavement surfaces and to classify them. Through image processing, the distinctive features of labeled crack bodies are first extracted from the UAV-based images and then used to train a model; its performance was assessed in a field study performed along a rigid pavement exposed to low traffic and real temperature variations. The cracks present were classified using the UAV-based system, and the results obtained show that it promises a good alternative solution for pavement monitoring applications.
Introduction

Over the last two decades, with the help of developments in high-level processing techniques to extract information from images, and in sensing technologies to capture images efficiently and accurately under various lighting conditions, pavement surface monitoring has been brought to an advanced level. Using such powerful tools and systems, the visual inspection of pavements is now considerably easier and more reliable than with conventional techniques, which require a massive amount of human labor and tend to produce less accurate and more biased inspections. Considering that most of the above advances toward expert-level pavement crack identification systems are limited to flexible pavements, there is still a need to apply these technologies to rigid pavements. With this in mind, in this paper we propose a UAV-based pavement crack identification system for monitoring the current condition of rigid pavements, based on the combination of image processing techniques and machine learning.
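The image-processing stage mentioned above exploits the fact that cracks appear as dark, thin features on a bright concrete surface. As a toy illustration only (a real pipeline would add filtering, morphology and the trained classifier; the threshold value and synthetic image here are invented), a minimal intensity-threshold segmentation can be sketched with NumPy:

```python
import numpy as np

def segment_cracks(gray, threshold=80):
    """Return a binary mask marking pixels darker than `threshold`
    (cracks appear as dark, thin features on a bright pavement surface)."""
    return (gray < threshold).astype(np.uint8)

def crack_ratio(mask):
    """Fraction of the frame flagged as crack: a crude severity index."""
    return mask.mean()

# Synthetic 100x100 'pavement' image: bright background, one dark crack line.
img = np.full((100, 100), 200, dtype=np.uint8)
img[50, 10:90] = 30                  # a 1-pixel-wide horizontal crack
mask = segment_cracks(img)
print(int(mask.sum()))               # 80 crack pixels
print(round(crack_ratio(mask), 4))   # 0.008
```

Features computed from such masks (length, width, orientation, density) are the kind of input a conventional classifier would then use to label crack types.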
The types of cameras used in the UAV

The first setup considered is the Draganflyer E4 helicopter and its onboard digital camera. The current UAV available to Louisiana State University researchers for storm-damage image collection is a remote-controlled, Styrofoam-based model plane fitted with a single digital camera mounted to the side of the fuselage. The plane can fly to altitudes over 1,500 feet and can be remotely controlled from over a mile away. A pitot tube is mounted on the right wing in order to determine flight speed. On the left wing, a forward-facing camera is mounted to help control the plane when it is out of the operator's sight. Monitors showing the in-flight and image-capturing camera views are available to aid flight and image quality. In addition to the current UAV plane, an easier-to-operate Draganflyer E4 helicopter will be acquired to investigate its use as a UAV image-collection platform. This UAV will be fitted with a high-resolution digital camera for capturing post-hurricane imagery. An open communication API gives access to telemetry, flight control, altitude, and roll, pitch and yaw angles. Pre-programmed flight plans can be executed using the UAV's GPS positioning capability. Since this UAV is based on a helicopter platform, the possible take-off and landing locations are greatly expanded: boat-based launches and landings become a distinct possibility, allowing marsh and swampland deployments. A picture of the Draganflyer E4 helicopter used to identify cracks in a pavement is sketched below:

Another camera that can be used to identify cracks in pavement is the Hasselblad X1D. Hasselblad's medium-format compact mirrorless camera represents a step forward in imaging technology. The medium-format sensor offers a field of view no other compact camera can currently compete with, and the cost, while significantly higher than the other cameras sketched out here, will potentially offer value for money for higher-end UAV professionals. At present only two lenses are available; however, the extra field of view afforded by the medium-format sensor, with effective focal lengths of 35 mm and 70 mm, will probably compensate for this lack of choice. Weighing a little more than 1 kg with a lens attached, it is still practical to mount on some low-payload UAVs. Further to this option, DJI, maker of the Phantom UAV series, has announced a partnership with Hasselblad to offer a bundle comprising the Matrice 600 UAV combined with Hasselblad's A5D medium-format camera on a Ronin-MX gimbal. While the lens mount for the A5D (Hasselblad H-mount) differs from that of the X1D, this package offers an 'out-of-the-box' medium-format UAV system which may be more convenient for users. A picture of the Hasselblad X1D used to identify cracks in a pavement is shown below:

The height at which the UAV needs to travel

The height at which the UAV needs to travel should range from 0.5 m to 3 m. In fact, one of the principal advantages of applying a UAV to identifying cracks in a pavement is the ability to acquire data in inaccessible zones where foliation and cracks may vary their attitude and other major geometric qualities (Fukuhara et al., 2014). Flight plans can be adjusted to reach any height above the ground so that observations are improved and obstacles are avoided. UAV data collection is non-invasive, safe and economical compared with TLS and helicopter surveys (no crew required), and UAV surveys can be completed quickly. The DSM generated from 3D point clouds and orthophotos is easily interpreted and can be managed in a GIS environment. A large number of features can be accurately mapped both in 2D and 3D, with great flexibility in data editing. A picture of the UAV at the flight height used to identify cracks in a pavement is shown below:
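The usable flight height is tied to the ground sampling distance (GSD) the camera can achieve: GSD = (sensor pixel size × height) / focal length is the standard photogrammetric relation. The sketch below uses illustrative numbers for a generic small-format camera, not the specifications of any camera named above:

```python
def ground_sampling_distance(height_m, focal_mm, sensor_width_mm, image_width_px):
    """GSD in metres per pixel: (sensor_width / image_width) * height / focal."""
    pixel_size_mm = sensor_width_mm / image_width_px   # physical size of one pixel
    return pixel_size_mm * height_m / focal_mm

# Illustrative: 7.6 mm wide sensor, 4000 px wide image, 10 mm lens, flown at 3 m.
gsd = ground_sampling_distance(height_m=3.0, focal_mm=10.0,
                               sensor_width_mm=7.6, image_width_px=4000)
print(round(gsd * 1000, 3))  # 0.57 mm per pixel
```

At the 0.5-3 m heights quoted above, a sub-millimetre GSD of this order is what makes hairline pavement cracks resolvable in the imagery.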

The number of pictures to be taken

The number of pictures to be taken is more than 100 pavement photos. The site was surveyed at a later date with high-precision positioning survey methods. The ground control point markings were painted across the site, with no fewer than five points per site and each point visible in at least three pictures. A cross was determined from past projects to be the best marker, as it is easily identifiable in post-processing. With the ground control points marked, a flight plan was then made. This consisted of creating a grid to capture pictures with a 75% overlap between pictures, done by drawing a scaled grid on the UAV map in the control device. A spreadsheet was produced to determine the grid spacing required to achieve a 75% overlap based on the altitude of the UAV and the camera sensor. Once a grid was made, the flight was completed to capture vertical pictures at every grid intersection. The strategy of drawing a grid in the manufacturer's application was required because a third-party flight planning application had not been updated to include the P3P. Once the pictures were captured for the creation of a 3D model, overview pictures were captured: vertical overview and oblique perspective pictures were gathered at a maximum height of 90 m to capture a site overview, and lower-altitude pictures of site-specific details were also captured. Finally, at selected areas, video was captured of the site to reproduce a drive-through (Ferguson and Waugh, 2015). This was identified as important from the beginning of the task, and accordingly no activity was postponed as a result of low batteries. During the first two areas, the grid that was created did not extend over the entire region of interest, or was on the opposite side of the point of interest. This error occurred because several of the washouts were situated in remote zones without discernible land features that appeared on the interface map. Consequently, it was determined that it would be more productive to capture the overview pictures before creating the grid; capturing overview pictures first improved the grid-development process by identifying the boundaries of the site during the overview flight. It was later established that the video ought to have been captured in 1080p quality. It was also an occasional issue that the video would end up corrupted, later determined to be a hardware issue with the UAV. Oblique pictures of the downstream region would have been helpful; these pictures were captured at later areas once a standard flight plan had been developed. After the first day of data collection, it was decided that panoramas would add value to the documentation of the site by providing ground-level points of reference. At significant sites, one or two panoramas were captured: panoramas were taken at the edge of the washout along the street and, where feasible, a panorama was taken at stream level where the course had previously existed.
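The 75% overlap figure above determines the photo count directly: with fractional overlap o, consecutive image centres are spaced (1 - o) × footprint apart in each axis. A sketch of that grid arithmetic (the site dimensions and image footprint below are illustrative values, not taken from the text):

```python
import math

def photo_count(site_w, site_h, foot_w, foot_h, overlap=0.75):
    """Number of grid photos covering a site_w x site_h area (metres)
    with the given image footprint (metres) and fractional overlap in both axes."""
    step_x = foot_w * (1 - overlap)   # spacing between image centres, along-track
    step_y = foot_h * (1 - overlap)   # spacing between flight lines
    cols = math.ceil(site_w / step_x) + 1
    rows = math.ceil(site_h / step_y) + 1
    return rows * cols

# Illustrative: 100 m x 20 m pavement strip, 8 m x 6 m image footprint, 75% overlap.
print(photo_count(100, 20, 8, 6))  # 765
```

High overlap is expensive in photo count (here well over the "more than 100" minimum) but it is what lets the photogrammetry tool match and mosaic adjacent frames reliably.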
The speed of the UAV

The speed of the UAV should range from 3 m/s to 10 m/s. Once the choice was made to use a UAV to gather data, the conditions and limitations of UAV operations were assessed to verify that it was feasible to use a UAV for the surveillance project. The significant constraints included: weather conditions, traffic and regulations, and assembly....
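The chosen speed also interacts with the camera trigger rate and forward overlap: the UAV must not advance more than (1 - overlap) × footprint between consecutive shots. A small sketch of that constraint (the trigger interval and footprint are illustrative assumptions, not values from the text):

```python
def max_ground_speed(footprint_along_track_m, overlap, trigger_interval_s):
    """Fastest ground speed (m/s) that still achieves the requested forward
    overlap, given the camera's minimum time between shots."""
    advance_per_shot = footprint_along_track_m * (1 - overlap)
    return advance_per_shot / trigger_interval_s

# Illustrative: 6 m along-track footprint, 75% overlap, one shot every 0.5 s.
print(max_ground_speed(6.0, 0.75, 0.5))  # 3.0
```

With these assumed numbers the limit lands at 3 m/s, the low end of the 3-10 m/s range quoted above; a faster shutter cycle or a larger footprint (higher flight) would permit the upper end.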
