Industry: Health insurance
Environment: z/OS, Windows 7, Hadoop HDFS cluster with MapR 5.1
Duration: Nov 2016 to present
Role: Big Data Architect
Big Data Platform as a Service (BDPaaS) enables the creation of more efficient, accurate, and cost-effective business analytics solutions, allowing the enterprise to focus on delivering critical solutions that make the overall healthcare system more efficient and help people lead healthier lives. It brings several best-of-breed big data and cloud technologies together in a highly scalable, enterprise-grade analytics platform. The platform includes foundational capabilities for data ingestion, data repository & processing, data discovery, search & visualization, data analytics, and data integration & enrichment. In addition to the foundational capabilities, the platform also covers system & data security and operations & maintenance.
• Guiding the full lifecycle of a Hadoop solution, including requirements analysis, data governance, capacity planning, technical architecture design (hardware, OS, network topology), application design, code review, testing, deployment, and execution of the process for moving data from the mainframe to the Hadoop environment.
• Working with the development and quality assurance teams.
• Building new tenant platforms on project request (ingesting data from the data lake, data modeling, Pig scripts, Hive HQL, and HBase).
• Data processing using Spark, Pig scripts, and shell scripts.
• Creating dashboards on Tableau and on Elasticsearch with Kibana.
Project-2
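The mainframe-to-Hadoop data movement described above typically involves decoding EBCDIC fixed-width records before landing them in HDFS. A minimal sketch in Python, using the standard cp037 EBCDIC codec; the field names and widths here are invented for illustration:

```python
# Hypothetical fixed-width layout for a mainframe member record
# (field names and widths are illustrative, not from the source).
LAYOUT = [("member_id", 8), ("state", 2), ("plan_code", 5)]

def decode_record(raw: bytes, layout=LAYOUT, encoding="cp037"):
    """Decode one EBCDIC (cp037) fixed-width record into a dict of fields."""
    text = raw.decode(encoding)
    fields, pos = {}, 0
    for name, width in layout:
        fields[name] = text[pos:pos + width].strip()
        pos += width
    return fields

# Round-trip demo: encode a sample record to EBCDIC, then decode it.
sample = "A1234567NYPPO01"
raw = sample.encode("cp037")
rec = decode_record(raw)
```

In a real pipeline a step like this would run inside the ingestion job (e.g., a Spark mapper) before writing to the landing zone.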
- Big Data Architect at United Health Group
- Big Data Architect at State Farm Insurance
- Big Data Architect at State Farm Insurance
- Infrastructure Engineer at State Farm Insurance
2 years, 4 months at this Job
- M.C.A - Master of Computer Applications
- B.B.A - Business Administration
* Played the role of Azure Big Data Architect for a manufacturing client and provided technical direction and oversight to the implementation team.
* Spearheaded a complex Azure cloud transformation initiative for a manufacturing company: defined the enterprise Azure-based data and analytics architecture, finalized tools & technologies on the Azure platform, defined naming conventions for the Azure platform, and defined the code deployment process.
* Implemented an Azure-based big data lake analytics platform for order and inventory analytics for a manufacturing client.
* Performed source-system analysis and data profiling, and defined ingestion and transformation patterns for structured and unstructured data sources into the Azure data lake, creating RAW, Staging, and Curated zones within Azure Data Lake Storage using Azure Data Lake Analytics, Azure Data Factory, and U-SQL for data transformation in the different layers of the data lake.
* Designed PolyBase tables on top of Azure Data Lake Storage for data lake consumption and ad hoc transformations.
* Implemented real-time analytics for bill-of-material data through Apache Spark and visualization.
* Designed and implemented Azure Analysis Services tabular models in both live-connection and direct-query modes for semantic model creation to facilitate Power BI dashboards and self-service reporting.
* Executed project delivery in DevOps mode; handled delivery through an 18-member onsite-offshore team.
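The RAW/Staging/Curated zoning described above usually comes down to a consistent, date-partitioned path convention in the lake. A minimal sketch of such a convention in Python; the storage account, container, and source names are made up for illustration:

```python
from datetime import date

# Illustrative zone layout for an Azure Data Lake Storage Gen2 account;
# the account ("contosoadls"), container ("datalake"), and source names
# are hypothetical.
ZONES = ("raw", "staging", "curated")

def lake_path(zone: str, source: str, entity: str, load_date: date) -> str:
    """Build a date-partitioned ADLS path following the zone convention."""
    if zone not in ZONES:
        raise ValueError(f"unknown zone: {zone}")
    return (f"abfss://datalake@contosoadls.dfs.core.windows.net/"
            f"{zone}/{source}/{entity}/{load_date:%Y/%m/%d}/")

p = lake_path("raw", "sap", "inventory", date(2019, 5, 1))
```

A pipeline tool such as Azure Data Factory would then read from one zone's path and write to the next as data moves from RAW toward Curated.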
- Azure Big Data Architect at Wipro Consulting
- AWS Big Data Architect and Solution Owner at Slalom Consulting
- Data Architect/Modeler at Client - IHI
- Roles - Big Data Architect at Cognizant Technology Solutions
9 months at this Job
- Bachelor of Engineering in Computer Science - Computer Science
Group: Analytics & Data Warehousing Team
Employment Duration: May 2014 - Jan 2019
Responsibilities & Technology:
§ Analytics & Big Data Architect; team leader for the Analytics Group. Responsible for overall planning, design, and implementation of the Oracle Business Intelligence platform on Oracle Engineered Systems (Exalytics & Oracle Big Data Platform).
§ Lead in charge of design/implementation/customization of Oracle Business Intelligence Applications (OBI Apps) for Finance, Procurement & Spend, and HR pre-built analytics.
§ Customized out-of-the-box Informatica ETL mappings (source extracts/target loads) for OBI Apps to fit business requirements.
§ Implemented and upgraded Primavera P6 Analytics (versions 8.x/16.x).
§ Oracle Engineered Systems performance tuning (Exalytics/Exadata systems).
§ Lead Oracle Data Integrator (ODI) architect, versions 11g/12c. Implemented high-availability architecture for ODI (versions 11.1.1.x/12.2.x).
§ Successful implementation and ongoing delivery of Call Center Analytics based on Avaya call centers.
§ Successful implementation and delivery of General Ledger, Budget, and Accounting Analytics.
§ Successful implementation and delivery of Inventory, Work & Asset Management, and Customer Care & Billing Analytics modules.
§ Responsible for advanced pattern-detection module design with use cases for big data analytics.
§ Conduct ongoing training sessions and knowledge transfer of all projects to in-house technical resources.
- Analytics & Big Data Architect at Washington Suburban Sanitary Commission
- Principal Oracle Consultant/Instructor at Oracle Corporation
- at American Express
- Noetix Views Administrator at United Airlines
4 years, 8 months at this Job
- M.B.A - International Business
Big Data Architect for various clients. Configured/built data-ingestion pipelines using big data tools such as Kafka, Flume, Sqoop, and Spark Streaming. Analytics using formula-based and machine learning algorithms: SparkR/Spark ML. Distributed computing and databases using Hadoop, Hive, HBase, and Spark, with formats such as Parquet/Avro. NoSQL data storage using Couchbase.
- Big Data Architect at Accenture
- Information Technology Associate at Tata Consultancy Services (TCS)
5 years, 6 months at this Job
- Bachelor's - Electrical Engineering
● Designed and developed ETL pipelines using Airflow and Kylo/NiFi
● Hive Data Lake
● Lead architect to drive vision for next generation BI & Analytics built using AWS
● Optimized queries utilizing OLAP cubes using Apache Kylin
● Hortonworks HDP (Ambari), AWS Infrastructure, Terraform, Airflow
● Created and maintained infrastructure for use by BI engineers and product SMEs
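The Apache Kylin optimization mentioned above works by precomputing every group-by combination (cuboid) of the cube's dimensions. A toy illustration of that idea in pure Python; the rows, dimensions, and measure are invented for the example:

```python
from itertools import combinations
from collections import defaultdict

# Toy fact rows; a real cube would be built from a Hive/warehouse table.
rows = [
    {"region": "east", "product": "a", "sales": 10},
    {"region": "east", "product": "b", "sales": 5},
    {"region": "west", "product": "a", "sales": 7},
]
dims = ("region", "product")

def build_cube(rows, dims, measure="sales"):
    """Precompute every group-by combination (cuboid) of the dimensions."""
    cube = {}
    for r in range(len(dims) + 1):
        for combo in combinations(dims, r):
            agg = defaultdict(int)
            for row in rows:
                key = tuple(row[d] for d in combo)
                agg[key] += row[measure]
            cube[combo] = dict(agg)
    return cube

cube = build_cube(rows, dims)
# cube[()] holds the grand total; cube[("region",)] holds per-region totals.
```

Kylin stores these precomputed cuboids so that an OLAP query becomes a lookup rather than a scan; this sketch only shows the aggregation shape, not Kylin's storage or query routing.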
- Big Data Architect at Hobsons, Inc
- Reference HR Admin at Applied Technical Systems, Inc
- Software Engineer/ Team Lead at Compu-Tech, Inc
- Full time Student at Central Washington University
2 years, 8 months at this Job
- - Computing Sciences
- B.S. - Computer Science
• Architecting and implementing, with hands-on development, Big Data/BI solutions on the Hortonworks HDP platform using: ◦ Data ingestion (Sqoop, Storm, Kafka) ◦ Data storage (HDFS, HBase, Hive) ◦ Data transformation & processing (Hive, Python, Spark, Scala) ◦ Data catalogs, security & governance (Collibra) ◦ Data visualization (WebFOCUS/BO)
• Meeting with business stakeholders and other technical team members to gather and analyze Big Data Analytics requirements
• Designing, maintaining and building conceptual, logical and physical database models.
• Transform business requirements into Big Data technical specifications.
• Develop prototypes and proof of concepts for specified business challenges.
• Design and develop data reporting solutions or pipelines using modern languages and agile development methods.
• Develop interfaces to acquire and publish data sets for defined use case(s).
• Write custom classes, functions, procedures, problem management, library controls and reusable components.
• Develop code using Big Data technology tools (Hive, Python, Spark, Scala) to deliver Big Data solutions.
• Perform unit and functional testing for developed code or applications.
• Perform code review and recommend enhancements that improve efficiency, performance and stability.
• Hands-on big data experience also included: ◦ Data modeling with Visio/ERwin ◦ Data streaming & integration tools and techniques such as Kafka, NiFi, and Storm to ingest data into the HDP cluster ◦ Creation of Hive tables and loading of data into Hadoop clusters ◦ Data cataloging/data governance through Collibra ◦ Data transformation through Hive and Spark/Scala
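Exposing ingested files as Hive tables, as described above, usually means generating external-table DDL over the landing directory. A small sketch of such a generator; the database, table, columns, and path are hypothetical:

```python
# Illustrative generator for Hive external-table DDL over ingested files;
# all names and columns below are examples, not from the source.
def hive_external_table(db, table, columns, location, fmt="PARQUET"):
    """Render a CREATE EXTERNAL TABLE statement for a landing directory."""
    cols = ",\n  ".join(f"{name} {ctype}" for name, ctype in columns)
    return (f"CREATE EXTERNAL TABLE IF NOT EXISTS {db}.{table} (\n"
            f"  {cols}\n"
            f") STORED AS {fmt}\n"
            f"LOCATION '{location}'")

ddl = hive_external_table(
    "claims", "encounters",
    [("encounter_id", "STRING"), ("svc_date", "DATE"),
     ("amount", "DECIMAL(10,2)")],
    "/data/raw/claims/encounters",
)
```

An external table keeps Hive metadata separate from the files, so dropping the table does not delete the ingested data.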
- Big Data Architect/Engineer at Ford Motor Company
- Big Data Architect/Engineer/Data Modeler at Delta Airlines
- Big Data/Business Intelligence Manager/Architect at MasterBrand Cabinets Inc
- Technical Architect at Consolidated the EDI
5 months at this Job
- Bachelor of Science - Computer Science and Engineering
• Maintain and support SQL Server database for the United States Department of Defense DPAS application via Leidos Corp
• Architected a relational database model using Erwin and Azure SQL Server to track military inventory changes. T-SQL and SSIS were used to replace C# code to record and report fluctuations of inventory items.
• Worked in a team environment to develop the beginning phase of an Azure Data Lake with imported log files. Created pandas structures for data analytics using Python.
• Create, revise and test Oracle scripts using SQL Developer and SQL Plus.
• Troubleshot and performance-tuned SQL databases, T-SQL, and database applications using extended events and dynamic management views.
• Developed a data transformation process spanning Oracle and SQL Server database platforms to load, filter, and scan BLOB data onto a SQL database. T-SQL and PL/SQL were utilized to complete the delivery of this project. Developed SSRS reports to report on the data as well as the execution results.
• Wrote technical documentation for database compression/fragmentation and other maintenance jobs of the organization's databases.
• Troubleshot and performance-tuned SSAS multidimensional databases using DMVs and Windows event counters.
• Created inventory dashboards using SSAS, SQL Server, and Power BI.
• Managed U.S. government databases using SQL Server data-tier applications through a series of periodic DACPAC releases. DACs were used for managing database configuration (CMDB).
• Developed complex reports for inventory data using Crystal Reports, SSRS, and Tableau. T-SQL and some Python were used to develop the data sets used in the reports.
• Used Git Repository via Visual Studio and Git Bash to maintain versions and releases of the DACPAC and the objects that define it.
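The log-file analytics mentioned above (imported logs analyzed with Python) can be sketched with a simple parse-and-aggregate pass; the log format, levels, and sample lines here are invented for illustration:

```python
import re
from collections import Counter

# Hypothetical log format: "<LEVEL> <message>"; real imported logs
# would have their own layout and a richer parser.
LOG_LINE = re.compile(r"(?P<level>INFO|WARN|ERROR)\s+(?P<msg>.*)")

def level_counts(lines):
    """Count log lines per severity level, skipping unparseable lines."""
    counts = Counter()
    for line in lines:
        m = LOG_LINE.match(line)
        if m:
            counts[m.group("level")] += 1
    return counts

sample = ["INFO started", "ERROR disk full", "INFO done", "junk line"]
counts = level_counts(sample)
```

In practice the same aggregation would be done over pandas DataFrames once the parsed lines are loaded, but the parsing step looks much like this.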
- Lead Big Data Architect at TEKsystems Consulting
- Lead DBA/Developer at Denzel Group Consulting
- Senior Data Warehouse/BI Architect at Wellspan Health
- Application Developer at Penn National Insurance
1 year, 3 months at this Job
- M.S. - Secondary Education
- B.S. - May
Prabir is currently the Big Data Architect and Hadoop/Hortonworks Data Platform subject matter expert for the Health Services Data Warehouse (HSDW) project for the US Defense Health Agency (DHA). The project encompasses a wide range of tasks related to a massive IT modernization and consolidation effort, including implementation of a Hortonworks Data Platform-based Hadoop data lake to serve as the central health data repository for the Health Informatics program. As part of the core implementation team, Prabir provides direct input and hands-on implementation guidance on big data architecture, data security, data engineering, and core Hadoop platform services, including Spark, Hive, Kafka, Ranger, Knox, Sqoop, NiFi, Python, and Zeppelin, running on the Hortonworks Data Platform (HDP).
- Big Data Architect at CACI International Inc
- Principal Cloud Developer at Sotera Defense Solutions
- Premier Hadoop Engineer at Cloudera/Hortonworks
- Senior Hadoop Engineer at GEICO Insurance Company
1 year, 11 months at this Job
- M.S. in Computer Science - Computer Science
- M.S. in Software Development and Analysis - Software Development and Analysis
Description: iLog is the front-end application advisors use to create repair cases for iPhone/iPad and Mac systems via phone, email, or chat. Responsibilities:
• Participated in requirement gathering sessions with client business owners, and provided inputs in to creation of Functional Requirement Documents.
• Design and develop the framework components
• Responsible for application architecture
• Provide solution to the problems in the project
• Wrote sections of the Technical Architecture and System Design documents
• Work with the team to resolve the technical problems
• Conducting code reviews
• Ensure the team follows proper coding standards and architecture guidelines (SONAR is used to monitor code coverage and coding standards)
• Define and Review delivery objectives, operations metrics, project schedule, timeline, status and manage IT service improvement initiatives.
• Create the Sprints and break down the tasks.
• Work with Agile methodology; created Radars based on work priority
• Capacity Planning
• Define and Review the application architectural design for the new requirements
• Create InfoSec, App2App, and Caesar requests based on project needs
• Create ACLs and validate server connectivity
• Coordination with customer & senior management
• Daily and Weekly Project reviews with client.
• Determining the resource requirements and hiring the required resources for the project teams.
• Proposal preparations based on client requirement.
• Mentoring and training.
• Prepare the Project Plans.
• Design and implement scalable Big Data architecture solutions for iLog application needs.
• Analyze multiple sources of structured and unstructured data to propose and design data architecture solutions for scalability, high availability, fault tolerance, and elasticity.
• Develop conceptual, logical and physical design for various data types and large volumes.
• Architect, design and implement high performance large volume data integration processes, database, storage, and other back-end services in fully virtualized environments.
• Work closely with customers, at a technical and user level, to design and produce solutions.
• Work closely with the product management and development teams to rapidly translate the understanding of customer data and requirements to product and solutions.
• Create topics, establish connections with the brokers, and read messages from topics
• Work with Kafka config and server properties
• Create and launch Kafka clusters.
Environment: iOS 10, Java, web services, Hadoop, Scala, Spark, JSON, Cocoa, Splunk, Radar, Agile (Scrum), MongoDB. Tools: Eclipse (Scala IDE)
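The topic/consume flow described in the bullets above can be sketched with a minimal in-memory stand-in; a real deployment would use a Kafka client library (e.g., kafka-python's `KafkaProducer`/`KafkaConsumer`) against the brokers, and the topic name and messages here are invented:

```python
# Minimal in-memory stand-in for the create-topic / produce / consume
# flow; this only illustrates the shape of the interaction, not Kafka
# partitioning, replication, or consumer groups.
class Topic:
    def __init__(self, name):
        self.name, self.log = name, []

    def produce(self, msg):
        self.log.append(msg)
        return len(self.log) - 1          # offset of the appended message

class Broker:
    def __init__(self):
        self.topics = {}

    def create_topic(self, name):
        return self.topics.setdefault(name, Topic(name))

    def consume(self, name, offset=0):
        return self.topics[name].log[offset:]

broker = Broker()
topic = broker.create_topic("repair-cases")   # hypothetical topic name
topic.produce({"case_id": 1, "device": "iPhone"})
topic.produce({"case_id": 2, "device": "Mac"})
msgs = broker.consume("repair-cases")
```

The offset-based `consume` mirrors how Kafka consumers track their position in a topic's log rather than removing messages on read.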
- Java, Big Data Architect at Apple Inc
- Onsite Delivery Lead/coordinator at Staples Inc
- Programmer, Sr Java Developer and Onsite coordinator at Nielsen Media Research
- Java UI Developer at British Telecom
2 years, 1 month at this Job
• Developed Scala scripts and UDFs using both DataFrames/SQL and RDD/MapReduce in Spark 2.0.0 for data aggregation and queries, writing data back into the RDBMS through Sqoop.
• Developed Spark code using Scala and Spark-SQL/Streaming for faster processing of data
• Developed Oozie 3.1.0 workflow jobs to execute Hive 2.0.0, Sqoop 1.4.6, and MapReduce actions.
• Designed the big data authentication solution using LDAP/Kerberos, with authorization using UNIX groups and HDFS ACLs.
• Architected Hadoop data security using HDFS Data Encryption Zones (DEZ), controlling data access with UNIX groups and HDFS ACLs.
• Architected/designed the integration solution using Oozie coordinators and Control-M.
• Worked with the technology manager and business stakeholders to demonstrate the strategic value of the data lake platform.
• Led the platform and data migration from BigInsights 4.1 to BigInsights 4.2.
• Designed a code generation framework using UNIX shell and Python to automate Hadoop code artifacts (BigSQL, Hive, HBase, Oozie coordinators, Oozie workflows).
• Designed data analysis and visualization using BigSQL, IBM DSM, and IBM BigSheets.
• Worked with IBM Bluemix support to solve platform issues and apply required patches.
Technology used: IBM BigInsights 4.1/4.2, IBM InfoSphere DataStage, Python, shell scripting, Hive, HBase, BigSQL, Spark, Pig, BigSheets, IBM DSM, Control-M, Oozie, DB2, OS/360, Linux
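The UNIX-group/HDFS-ACL access-control design mentioned above is typically applied with `hdfs dfs -setfacl`. A small sketch that renders such commands; the paths and group names are made up for illustration:

```python
# Illustrative builder for HDFS ACL commands like those used to control
# data access by UNIX group; the paths and groups below are hypothetical.
def setfacl_cmd(path, group, perms, default=False):
    """Render an 'hdfs dfs -setfacl -m' command for a group ACL entry."""
    spec = f"group:{group}:{perms}"
    if default:
        # Default ACL entries are inherited by new files/dirs under path.
        spec = "default:" + spec
    return f"hdfs dfs -setfacl -m {spec} {path}"

cmds = [
    setfacl_cmd("/data/curated/claims", "analysts", "r-x"),
    setfacl_cmd("/data/curated/claims", "etl", "rwx", default=True),
]
```

Splitting read-only access (analysts) from read-write access (ETL service accounts) this way keeps the POSIX group model simple while ACLs handle the exceptions.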
- Sr. Big Data Architect at
- Sr. Data Architect at Veracity Englewood C
- Sr. Enterprise Data and Big Data Architect at Belk, Inc
- Integration/Data Architect at BJ's Wholesale Club
2 years, 6 months at this Job
- MS in Computer Science - Computer Science