Data engineering was software engineering from the very beginning. Then a bunch of business analysts who didn't know anything about writing software got jealous and said that if you knew SQL/DBT you were a data engineer. I've had to explain too many times that yes, indeed, I can set up a CI/CD pipeline, stand up Kafka, or deploy Dagster on ECS, to the point where I think I need to change my title just to not be cheapened.
sdairs
I think even before dbt turned DE into "just write sql & yaml", there was an appreciable difference in DE vs SE. There were defo some DEs writing a lot of Java/Scala if they were in Spark-heavy cos, but my experience is that DEs were doing a lot more platform engineering (similar to what you suggest), SQL and point-and-click (just because that was the nature of the tooling). I wasn't really seeing many DEs spending a lot of time in an IDE.
But I think what's interesting from the post is looking at SEs adopting data infra into their workflow, as opposed to DEs writing more software.
craneca0
yeah, I've seen large Fortune 100 data and analytics orgs where the majority of folks with data engineering titles are uncomfortable with even the basics of git.
isaacremuant
Agreed. It was a weird distinction used to pay some people less, and you could see a high variance between "data engineers": some who had only done a course and others who had extensive knowledge of software engineering practices were considered the same.
Ridiculous.
mynameisash
The comments here are... interesting, as they indicate a strong split between analysts and those engineers that can operationalize things. I see another dimension to it all.
My title is senior data engineer at GAMMA/FAANG/whatever we're calling them. I have a CS degree and am firmly in the engineering camp. My passion, though, is in using software engineering and computer science principles to make very large-scale data processing as stupid fast as we can. To the extent I can ignore it, I don't personally care much about the tooling and frameworks and such (CI/CD, Airflow, Kafka, whatever). I care about how we're affinitizing our data, how we index it, whether and when we can use data sketches to achieve a good tradeoff between accuracy and compute/memory, and so on.
While there are plenty of folks in this thread bashing analysts, one could also bash other "proper" engineers that can do the CI/CD but don't know shit about how to be efficient with petabyte-scale processing.
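To give a flavor of the data-sketch tradeoff I mean, here's a toy example (it assumes the Apache DataSketches Python bindings, the datasketches package; the numbers are purely illustrative, not from any real workload):

    # Toy example, not production code: approximate distinct counting with an
    # HLL sketch. Assumes `pip install datasketches` (Apache DataSketches bindings).
    from datasketches import hll_sketch

    sk = hll_sketch(12)          # lg_k = 12 -> a few KB of state, ~1.6% relative error
    for user_id in range(1_000_000):
        sk.update(user_id)

    print(round(sk.get_estimate()))   # ~1,000,000, give or take a percent or two

The point is that an exact distinct count needs memory proportional to cardinality, while the sketch stays at a few KB no matter how many distinct values flow through it.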
tdb7893
I mean this very sincerely, but I'm a little lost as to how data engineering is distinct from software engineering. It seems like just a subset of it; my title was software engineer and I've done what sounds like very similar work.
giantg2
I've never really seen the distinction between data and software engineering. It's more like frontend vs backend. If you're a data engineer and it's all no-code tooling, then you're just an analyst or something.
flexiflex
When I worked at bigCo, it was a totally different world. Data engineers used data platform tools to do data work, usually for data’s sake. Software teams trying to build stuff with data had to finagle their way onto roadmaps.
sdairs
this has been my experience too
SrslyJosh
"Data engineering and software engineering are converging" says firm selling analytics products/services. I think the perspective here may be a bit skewed.
getnormality
It's not hard to do data engineering to the standards of software engineering, and many people do it already, provided that:
1. You use a real programming language that supports all the abstractions software engineers rely on, not (just) SQL.
2. The data is not too big, so the feedback cycle is not too horrendously slow.
#2 can't ever be fully solved, but testing a data pipeline on randomly subsampled data can help a lot in my experience.
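For what it's worth, the shape of it is roughly this (column names and the transform are made up; pandas is used only to keep the example self-contained):

    # Sketch: test a pipeline step against a deterministic random subsample so
    # the feedback loop stays fast. All names here are hypothetical.
    import pandas as pd

    def transform(events: pd.DataFrame) -> pd.DataFrame:
        # Stand-in for a real pipeline step: revenue per user.
        return events.groupby("user_id", as_index=False)["revenue"].sum()

    def sample(df: pd.DataFrame, frac: float = 0.01, seed: int = 42) -> pd.DataFrame:
        # Deterministic subsample so failures reproduce locally and in CI.
        return df.sample(frac=frac, random_state=seed)

    def test_transform_on_sample():
        events = pd.DataFrame({
            "user_id": [i % 500 for i in range(10_000)],
            "revenue": [i % 50 for i in range(10_000)],
        })
        out = transform(sample(events))
        # Cheap invariants rather than exact totals: the point is fast feedback.
        assert not out.empty
        assert (out["revenue"] >= 0).all()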
sdairs
In your experience, how are folks doing (1)? The post is talking about a framework to add e.g. type safety, schema-as-code, etc. over assets in data infra in a familiar way as to what is common with Postgres; I'm not familiar with much else out there for that?
zurfer
Maybe. On the one hand you have something like dbt or Moosestack. On the other, analytics and data pipelining is still a lot of no-code tooling and I doubt it will go away.
However I would love to learn more about how other people use coding agents to do DE tasks.
craneca0
agreed on the presence and stickiness of no-code tooling. but in a future where we want to enable LLMs and agents to do as much of that work as possible, a code-first approach seems far more likely to make that effective. not just because agents are better at writing code than clicking through interfaces (maybe that will change as agents evolve?), but because the SDLC is valuable for agents for the same reasons it's valuable for human developers - collaboration, testing, auditing, versioning, etc.
rawgabbit
In Snowflake, I am now writing Python stored procedures that make REST API calls to services like Datadog and dump the JSON into a Snowflake table. I then unpack the JSON and transform it into a normalized table. So far it works reasonably well. This is possible using Snowflake's external access feature. https://docs.snowflake.com/en/developer-guide/external-netwo...
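The handler is roughly this shape (a minimal sketch; the endpoint, table name and column are made up, and in practice the procedure has to be created with an external access integration attached):

    # Sketch of a Snowpark stored-procedure handler that pulls JSON from an
    # external REST API and lands it raw; names and endpoint are hypothetical.
    import json
    import requests
    from snowflake.snowpark import Session

    def run(session: Session, api_url: str, target_table: str) -> str:
        resp = requests.get(api_url, timeout=30)
        resp.raise_for_status()
        records = resp.json().get("data", [])

        # Land the raw JSON one record per row; unpack/normalize in a later step.
        df = session.create_dataframe(
            [[json.dumps(rec)] for rec in records],
            schema=["raw_json"],
        )
        df.write.mode("append").save_as_table(target_table)
        return f"loaded {len(records)} records into {target_table}"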
banku_brougham
If you are orchestrating pipelines in Airflow or Prefect you have to write the client code around those engines, and it's a lot of Python.
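Even a trivial DAG is just Python; something like this (Airflow 2.x TaskFlow style, with the schedule and task bodies invented for illustration):

    # Minimal Airflow DAG sketch; the tasks are placeholders.
    from datetime import datetime
    from airflow.decorators import dag, task

    @dag(schedule="@daily", start_date=datetime(2024, 1, 1), catchup=False)
    def example_etl():
        @task
        def extract() -> list[dict]:
            return [{"id": 1, "value": 10}, {"id": 2, "value": 20}]

        @task
        def load(rows: list[dict]) -> None:
            print(f"would write {len(rows)} rows to the warehouse")

        load(extract())

    example_etl()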
Another anecdatum: the data engineers role at Zillow is called "Software Development Engineer, Big Data"
botswana99
Many data teams find themselves working as 'tool jockeys' instead of becoming true engineers. They primarily learn some company data and then rely on drag-and-drop or YAML configuration within the constraints of the tool's environment.
Their organization often insists they must use the standard tools, and their idea of a good job is that the task works fine in their personal environment. No automated testing, no automated deployment, no version control, and handcrafted environments. And then they get yelled at when things break and yelled at for taking too long. And most DEs want to quit the field after a few years.
The real question is not whether DE and software engineering are converging. It's why most DEs don't have the self-respect and confidence to engineer systems so that their lives don't suck.
rorylawless
Prefacing this with an acknowledgement that I'm a public sector data analyst by trade so my experience may not be universal.
My view is that it isn't so much a lack of "self-respect and confidence" but an acknowledgment that the path of least resistance is often the best one. Often data teams are something that was tacked on as an afterthought and the organizational environment is oriented towards buying off-the-shelf solutions rather than developing things in house.
Saying that, version control and replicable environments are becoming standard in the profession and, as data professionals become first-class citizens in organizations, we may find that orgs orient themselves towards a more production-focused environment.
zamalek
One thing I have seen through my more recent exposure to experienced data engineers is the lack of repeatability rigor (CI/CD, IaC, etc.). There's a lot of doing things in notebooks and calling that production-ready. Databricks has git integration (GitHub only from what I can tell), but that's just checking out and directly committing to trunk; if it's in git then we have SDLC, right, right? It's fucking nuts.
Anyone have workflows or tooling that are highly compatible with the entrenched notebook approach, and are easy to adopt? I want to prevent these people from learning well-trodden lessons the hard way.
faxmeyourcode
This is insane to read as a data engineer who actually builds software. These sound like amateurs, not experienced data engineers to be perfectly honest.
There are plenty of us out here with many repos, dozens of contributors, and thousands of lines of Terraform and Python, custom GitHub Actions, k8s deployments running Airflow, internal full-stack web apps that we're building, EMR Spark clusters, etc. All living in our own Snowflake/AWS accounts that we manage ourselves.
The data scientists that we service use notebooks extensively, but it's my team's job to clean it up and make it testable and efficient. You can't develop real software in a notebook; it sounds like they need to upskill into a real orchestration platform like Airflow and run everything through it.
Unit test the utility functions and helpers, data quality test the data flowing in and out. Build diff reports for understanding big swings in the data to sign off changes.
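The diff reports are nothing fancy; roughly this, with hypothetical column names and thresholds (pandas just to keep it self-contained):

    # Sketch of a sign-off diff report: compare a metric between the current and
    # candidate versions of a table and flag big swings. Names are made up.
    import pandas as pd

    def diff_report(current: pd.DataFrame, candidate: pd.DataFrame,
                    key: str, metric: str, threshold: float = 0.10) -> pd.DataFrame:
        merged = current.merge(candidate, on=key, suffixes=("_cur", "_new"))
        merged["pct_change"] = (
            (merged[f"{metric}_new"] - merged[f"{metric}_cur"])
            / merged[f"{metric}_cur"].abs()
        )
        # Only surface rows whose swing exceeds the sign-off threshold.
        return merged[merged["pct_change"].abs() > threshold]

    cur = pd.DataFrame({"region": ["us", "eu"], "revenue": [100.0, 200.0]})
    new = pd.DataFrame({"region": ["us", "eu"], "revenue": [103.0, 260.0]})
    print(diff_report(cur, new, key="region", metric="revenue"))  # flags "eu" (+30%)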
My email is in my profile I'm happy to discuss further! :-)
RobinL
I think this may be a databricks thing? From what I've seen there's a gap between data engineers forced to use databricks and everyone else. At least as it's used in practice, databricks seems to result in a mess of notebooks with poor dependency and version management.
zamalek
Interesting, databricks has been my first exposure to DE at scale and it does seem to solve many problems (even though it sounds like it's causing some). So what does everyone else do? Run spark etc. themselves?
RobinL
We use aws glue for spark (but are increasingly moving towards duckdb because it's faster for our workloads and easier to test and deploy).
For Spark, glue works quite well. We use it as 'spark as a service', keeping our code as close to vanilla pyspark as possible. This leaves us free to write our code in normal python files, write our own (tested) libraries which are used in our jobs, use GitHub for version control and ci and so on
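Concretely, the pattern is to keep the transform in an importable, tested function and keep the job entrypoint as thin vanilla PySpark; a sketch with made-up table paths and columns:

    # Sketch of the "vanilla PySpark, testable library" pattern; paths and
    # columns are hypothetical.
    from pyspark.sql import DataFrame, SparkSession, functions as F

    def daily_totals(orders: DataFrame) -> DataFrame:
        # Pure transform: unit-testable with a local SparkSession.
        return (
            orders
            .where(F.col("status") == "complete")
            .groupBy("order_date")
            .agg(F.sum("amount").alias("total_amount"))
        )

    if __name__ == "__main__":
        spark = SparkSession.builder.appName("daily_totals").getOrCreate()
        orders = spark.read.parquet("s3://example-bucket/orders/")
        daily_totals(orders).write.mode("overwrite").parquet("s3://example-bucket/daily_totals/")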
sdairs
tbh I see just as much notebook-hell outside of dbx, it's certainly not contained to just them. There's some folks doing good SDLC with Spark jobs in java/scala, but I've never found it to be overly common, I see "dump it on the shared drive" equally as much lol. IME data has always been a bit behind in this area
personally you couldn't pay me to run Spark myself these days (and I used to work for the biggest Hadoop vendor in the mid 2010s doing a lot of Spark!)
esafak
For CI, try dagger. It's code based and runs locally too, so you can write tests. But it is a moving target and more complex than Docker.
ViewTrick1002
That is what dbt solves. Version your SQL and continuously rehydrate the data to match the most recent models.