Big Data Java Automation Engineer - Pune - Pitney Bowes
Join Pitney Bowes as Java & Data QA Automation Engineer (Taleo ID: 164395).
Years of Experience required- 4 to 6 years
Job Location- Pune
As a QA engineer, you will be part of the Big Data and Analytics Platform Team. The team has built a multi-tenant Amazon platform using several technologies including S3, IAM, EMR, Spark, Kinesis, Redshift, DynamoDB, and R. This enables Business Units, Clients, and Partners to publish data catalogs and store data in a common Data Lake, and enables data scientists to develop new value offerings and productize them for monetizable value.
- Testing in Hadoop, Amazon Web Services, Hive/HBase, Pig/MapReduce, Java, and JavaScript-based ecosystems.
- Working on the cutting edge of a wide range of innovative AWS use cases and AWS Big Data solutions, including S3, EMR, Spark, Redshift, and other integration tools.
- Collaborating across teams to develop solution-based, home-grown automated tests and ensuring that feedback from different stakeholders is incorporated.
- Creating an end-to-end test plan, executing the plan, and managing all activities in it to ensure that all objectives are met and that the solution works as expected.
- Testing the solution for functionality, performance, reliability, stability, and compatibility with other legacy or external systems.
- Ensuring that every phase and feature of the software solution is tested and that any potential issue is identified and fixed before the product goes live.
- Understanding data quality issues related to large volumes of data and defining the required testing methods for data flows and ETL processing.
- Automating the data verification process for huge volumes of data from source data and workflows.
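The data-verification responsibility above can be sketched, for illustration, as a count-and-checksum comparison between source records and processed output. The class name, record keys, and toy checksum below are assumptions for the sketch, not Pitney Bowes' actual framework:

```java
import java.util.Map;

// Minimal sketch: verify that the target dataset matches the source by
// comparing row counts and per-key checksums. Real frameworks would compute
// checksums from the actual payloads (e.g. via Spark jobs over the Data Lake).
public class DataVerifier {

    // Target must have the same number of keys, and each key's checksum
    // must match the source's checksum for that key.
    static boolean verify(Map<String, Long> sourceChecksums,
                          Map<String, Long> targetChecksums) {
        if (sourceChecksums.size() != targetChecksums.size()) {
            return false; // row-count mismatch
        }
        return sourceChecksums.entrySet().stream()
                .allMatch(e -> e.getValue().equals(targetChecksums.get(e.getKey())));
    }

    // Toy checksum for the sketch: sum of character codes of a record payload.
    static long checksum(String payload) {
        return payload.chars().asLongStream().sum();
    }

    public static void main(String[] args) {
        Map<String, Long> source = Map.of(
                "rec-1", checksum("alice,100"),
                "rec-2", checksum("bob,200"));
        Map<String, Long> matching = Map.of(
                "rec-1", checksum("alice,100"),
                "rec-2", checksum("bob,200"));
        Map<String, Long> corrupted = Map.of(
                "rec-1", checksum("alice,100"),
                "rec-2", checksum("bob,999"));

        System.out.println(verify(source, matching));  // true
        System.out.println(verify(source, corrupted)); // false
    }
}
```

In practice the same count-plus-checksum idea scales to large volumes by pushing the checksum computation into the cluster and comparing only the small aggregate results.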
Pitney Bowes, a global technology company, provides shipping & mailing solutions, data management software, and location intelligence offerings, powering billions of physical and digital transactions in the connected and borderless world of commerce. Helping clients achieve their greatest commerce potential are Pitney Bowes' 16,000+ passionate employees around the world, our relentless pursuit of innovation with over 2,300 active patents, and our focus on clients, who are at the center of all that we do - from small businesses to 90% of the Fortune 500.
Know more about us-
Who we are- https://www.youtube.com/watch?v=DDHx_0OV6D4&list=PL06B89F1DD8A5475A
Digital Commerce- https://youtu.be/T_dTWGQH7Cs
Location Intelligence Domain- https://www.youtube.com/watch?v=XouclG_puWs
Pitney Bowes, Great Place to Work- https://www.youtube.com/watch?v=-fhcBkm2S9A
Life at Pitney Bowes- https://www.youtube.com/watch?v=b6sqC1w_cGk
Pitney Bowes Diversity- https://www.youtube.com/watch?v=osYLqcU2JsQ
Pitney Bowes India Leadership- https://www.youtube.com/watch?v=H2cuGAjpDAI
- UG - B.Tech/B.E. OR PG - M.S./M.Tech, REC or above
- Deep knowledge of automated test frameworks/test methodologies and testing experience in Hadoop/AWS, Hive/Pig and Java components.
- 2+ years of experience with major big data technologies and frameworks including but not limited to Hadoop, MapReduce, Pig, Hive, HBase, Oozie, Mahout, Flume, ZooKeeper, MongoDB, and/or NoSQL databases.
- Should have tested code that was successfully deployed to production.
- Familiar with REST API and web services testing using SoapUI.
- Experience in testing ETL or other data-processing workflows with complex business logic by writing test scripts.
- Experience in non-functional testing using JMeter.
- Construct and execute positive/negative test cases in order to preempt and catch all bugs within the QA environment.
- Experience with Amazon Redshift or similar large-scale data warehousing systems such as Vertica, Aster, Teradata, and Netezza.
- Test and report defects on all project management artifacts. Ability to work in an onsite-offsite model.
- Work in tandem with developers, leads, and PMs to understand the requirements/design and the scope/schedule of work, and to construct test cases, execute them, and report bugs through to code fixes.
- Knowledge of one or more scripting languages to create automated test frameworks for Big Data solutions.
- Experience with one or more of the following relational databases: MySQL, PostgreSQL, Oracle, SQL Server.
- Experience with Amazon Web Services technologies like S3, SQS, EMR, Dynamo, etc.
- Experience in identifying issues related to distributed systems, data movement, reliability, performance etc.
- Proven track record of web/Cloud/ETL/Big data quality engineering.
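The positive/negative test-case requirement listed above can be illustrated with a plain-Java sketch. The ID-validation rule below is a hypothetical example invented for the sketch, not part of the role's actual system:

```java
// Sketch of positive/negative test cases around a hypothetical validator.
// Positive cases confirm well-formed input is accepted; negative cases
// confirm malformed input is rejected before it can reach production.
public class RecordValidatorTest {

    // Hypothetical rule under test: an ID is valid if it is non-null,
    // non-empty, at most 10 characters, and purely alphanumeric.
    static boolean isValidId(String id) {
        return id != null && !id.isEmpty()
                && id.length() <= 10
                && id.chars().allMatch(Character::isLetterOrDigit);
    }

    static void check(boolean condition, String name) {
        System.out.println((condition ? "PASS: " : "FAIL: ") + name);
    }

    public static void main(String[] args) {
        // Positive cases: well-formed IDs must be accepted.
        check(isValidId("A1B2C3"), "alphanumeric id accepted");
        check(isValidId("x"), "single character accepted");

        // Negative cases: malformed IDs must be rejected.
        check(!isValidId(null), "null rejected");
        check(!isValidId(""), "empty string rejected");
        check(!isValidId("id-with-dash"), "punctuation rejected");
        check(!isValidId("abcdefghijk"), "11 characters rejected");
    }
}
```

The same pattern (one assertion per boundary condition, negatives mirroring each positive) carries over directly to JUnit or any other automated framework.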