Available for new projects

Data Engineer & Software Engineer

I help teams turn raw data into reliable insights and systems they can trust.

Cloud-native pipelines, automation, and backend engineering — designed for correctness, maintainability, and clear ownership.

7+ Years in Software Engineering
4+ Data Pipeline Projects
AWS Cloud & Lakehouse
Kristoffer Kero
Based in Texas • Available for contract work

What I Do

Areas where I’ve built hands-on experience across data engineering and software systems

Data Engineering & Pipelines

Serverless data pipelines built using AWS-native services (Lambda, Glue, Step Functions), following production-style patterns such as medallion architectures, incremental processing, and explicit orchestration boundaries.
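As a concrete illustration, the incremental-processing half of that pattern can be sketched in plain Python. This is a minimal sketch, not code from a real pipeline: an in-memory dict stands in for the DynamoDB watermark table, and all names are illustrative.

```python
from datetime import date

# Hypothetical in-memory watermark store standing in for a DynamoDB table.
_watermarks: dict[str, date] = {}

def get_watermark(pipeline: str) -> date:
    """Return the last date processed, or a default backfill start."""
    return _watermarks.get(pipeline, date(2024, 1, 1))

def set_watermark(pipeline: str, processed_through: date) -> None:
    _watermarks[pipeline] = processed_through

def incremental_batch(pipeline: str, records: list[dict]) -> list[dict]:
    """Select only records newer than the stored watermark, then advance
    the watermark so an immediate rerun picks up nothing twice."""
    watermark = get_watermark(pipeline)
    new = [r for r in records if r["event_date"] > watermark]
    if new:
        set_watermark(pipeline, max(r["event_date"] for r in new))
    return new
```

The same shape works whether the batch boundary is a Step Functions state, a Glue job run, or a Lambda invocation: the watermark is the explicit orchestration boundary.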

Event-Driven Architecture

Event-driven serverless workflows using API Gateway, SQS, and Lambda. Experience with webhook ingestion, message buffering, retry logic, and error handling in asynchronous distributed systems.
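The retry-and-dead-letter part of that flow can be sketched as a single message handler. Assumptions are labeled in the comments: the record shape mirrors what Lambda receives from SQS (including the `ApproximateReceiveCount` attribute), while `process` and `dead_letters` are stand-ins for the real worker and DLQ.

```python
import json

MAX_RECEIVES = 3  # assumed redrive threshold before a message is parked

def handle_sqs_record(record: dict, process, dead_letters: list) -> bool:
    """Process one SQS-style message. On failure, leave it for visibility-
    timeout redelivery; once the receive count is exhausted, route it to a
    dead-letter list so one bad message cannot block the queue."""
    body = json.loads(record["body"])
    receive_count = int(record["attributes"]["ApproximateReceiveCount"])
    try:
        process(body)
        return True  # success: the message would be deleted from the queue
    except Exception:
        if receive_count >= MAX_RECEIVES:
            dead_letters.append(body)  # stand-in for the real DLQ
            return True  # acknowledge so it stops redelivering
        return False  # leave on the queue for a retry
```

In a real deployment the redrive policy lives on the queue itself; the point of the sketch is that the handler stays idempotent and failure handling is explicit.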

Analytics & Data Warehousing

Analytics workflows built on Snowflake and Python, combining SQL-based warehouse querying with Pandas transformations and interactive visualizations to support exploratory analysis and reproducible reporting.
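A minimal sketch of the transformation step in that workflow, assuming the rows have already been fetched from Snowflake (in the real flow this DataFrame would come from `pd.read_sql` over a SQLAlchemy engine; the column names here are illustrative):

```python
import pandas as pd

def monthly_sales(df: pd.DataFrame) -> pd.DataFrame:
    """Aggregate week-level sales rows to month level for seasonal-trend
    analysis. Expects 'week' (date string) and 'sales' (numeric) columns."""
    out = df.copy()
    out["month"] = pd.to_datetime(out["week"]).dt.to_period("M")
    return out.groupby("month", as_index=False)["sales"].sum()
```

Keeping the transformation in a pure function of the input DataFrame is what makes the reporting reproducible: the same query result always yields the same chart.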

PySpark & Batch Processing

Distributed batch processing using AWS Glue and PySpark, including schema enforcement, partition-aware transformations, and optimization strategies for non-trivial data volumes.
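The schema-enforcement idea can be shown without a Spark cluster. This is a plain-Python stand-in, not the Glue implementation: in PySpark the same guard is a `StructType` applied on read plus a cast-and-filter step, and the field names below are invented for the example.

```python
# Expected field -> coercion for each incoming record (illustrative only).
EXPECTED = {"id": int, "event_date": str, "plays": int}

def enforce_schema(row: dict):
    """Coerce a raw record to the expected types, or reject it. Dropping
    bad rows at the boundary keeps them from poisoning downstream
    partitions, at the cost of needing a separate quarantine path."""
    try:
        return {field: cast(row[field]) for field, cast in EXPECTED.items()}
    except (KeyError, TypeError, ValueError):
        return None  # reject: missing field or value that will not coerce
```

The partition-aware part does not fit a toy example: it amounts to choosing partition keys (for instance, by event date) so that incremental runs touch only the partitions they changed.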

CI/CD & Infrastructure Automation

CI/CD pipelines built with GitHub Actions, including OIDC-based authentication, automated testing, and idempotent deployment workflows to support reliable iteration and safe changes.

API Integration & Backend Systems

Backend and API development involving RESTful services, third-party integrations (Stripe, Slack, Email, CRM systems), secure secrets handling, and webhook-based workflows.
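One small, self-contained piece of that work is webhook signature verification. The sketch below shows the HMAC-SHA256 pattern that Stripe-style providers use; the exact header format and scheme vary by provider, so treat this as the general shape rather than any one vendor's API.

```python
import hashlib
import hmac

def verify_webhook(payload: bytes, signature: str, secret: str) -> bool:
    """Check that a webhook body was signed with our shared secret.
    compare_digest avoids leaking information via timing differences."""
    expected = hmac.new(secret.encode(), payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, signature)
```

The secret itself would come from a secrets manager, never from source control; the handler rejects any request whose signature does not match before touching the payload.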

Why Work With Me

I bring senior-level technical leadership combined with hands-on implementation skills. My approach emphasizes reliability, clarity, and long-term system health over quick fixes.


Modern Tech Stack

Cloud-native, battle-tested tools

1. Proven Experience in Distributed Systems

Years of work in telecom automation and DevOps have given me deep understanding of complex, interconnected systems.

2. End-to-End Ownership

Built entire systems alone — including SaaS products, CI/CD pipelines, data platforms, and orchestration workflows.

3. Strong Engineering Discipline

Prioritize clarity, maintainability, observability, and correctness. Systems should be easy to understand, debug, and improve.

4. Full-Stack Breadth with Data Engineering Depth

Background spans SWE, DevOps, and DE — enabling holistic problem-solving across the entire stack.

5. Team Integration or Independent Operation

Deliver results with minimal oversight while maintaining professional communication and clear documentation.

6. Professional Integrity & Reliability

Take pride in doing things right. Detail-oriented, honest, and principled in all engineering decisions.

Featured Expertise

A comprehensive skill set built through years of hands-on experience

Data Engineering & Processing

AWS Glue & PySpark · AWS Lambda · Step Functions Orchestration · Medallion Architecture · Incremental Processing · Schema Validation · Parquet & Data Formats · DynamoDB Watermarking

Cloud Data Warehouses & SQL

Snowflake · SQLAlchemy · SQL Query Optimization · Athena & Glue Catalog · Warehouse Connectivity · Data Modeling · Analytical SQL · Cloud Data Architecture

Python & Analytics

Python Development · Pandas Analysis · Plotly Visualization · Data Transformation · Exploratory Analysis · Reproducible Workflows · RESTful API Design · Error Handling & Retries

AWS Serverless & CI/CD

Lambda Functions · API Gateway · Amazon SQS · GitHub Actions · OIDC Authentication · Infrastructure as Code · Automated Testing · CloudWatch Monitoring
AWS (Cloud Platform)
Snowflake (Data Warehouse)
Python, SQL & Java (Primary Languages)
PySpark (Data Processing)

Featured Projects

Production systems demonstrating end-to-end data engineering excellence

1. Wistia Video Analytics Pipeline

Fully automated AWS-native data pipeline ingesting daily video analytics from the Wistia API. Processes data through a Bronze/Silver/Gold medallion architecture with incremental updates and DynamoDB watermarking.

AWS · Step Functions · PySpark · Athena · Lambda · DynamoDB
  • Medallion architecture
  • Incremental processing
  • CI/CD with GitHub Actions
2. Walmart Sales Analytics Pipeline

Python-based analytics workflow connecting to Snowflake for retail sales analysis. SQLAlchemy queries with Pandas transformations and interactive Plotly visualizations exploring seasonal trends.

Snowflake · SQLAlchemy · Python · Pandas · Plotly
  • Programmatic warehouse access
  • Exploratory analysis
  • Reproducible workflows
3. Real-Time CRM Lead Pipeline

Event-driven serverless pipeline capturing CRM webhooks, enriching lead data with owner information, and delivering real-time Slack notifications with 10-minute SQS buffering.

Lambda · SQS · API Gateway · S3 · Slack
  • Sub-minute latency
  • Idempotent processing
  • Secure secrets management

Let's Build Something Great Together

I’m open to working across a wide range of projects, short or long term, exploratory or production-focused, spanning data engineering, backend systems, and cloud-native architecture. I’m comfortable owning systems end-to-end, and I’m equally open to growing into a role where the scope or technology is still evolving. If the problem is interesting and the expectations are clear, I’m happy to explore how I can contribute.

Especially interested in teams that value clarity, ownership, and long-term system quality.

Quick Response

I aim to reply within 1–2 business days

Flexible Engagement

Project-based or ongoing contracts

Global Availability

Based in Texas, open to remote collaboration