Agreed, this isn't properly normalized, and it's going to make your data harder to work with down the road. If you need a range of years for a particular record and it isn't contiguous, it's probably better to make another table with id and year stored in it and join to it. With proper indexing, such a table will be small and fast.
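Something like this sketch, assuming a hypothetical `records` table keyed by `id` (the table and column names are just placeholders):

```sql
-- Hypothetical normalized layout: one row per (record, year) pair.
CREATE TABLE record_years (
    record_id integer NOT NULL REFERENCES records (id),
    year      integer NOT NULL,
    PRIMARY KEY (record_id, year)
);

-- Find every record that covers 2009; the primary key index handles the lookup.
SELECT r.*
FROM records r
JOIN record_years ry ON ry.record_id = r.id
WHERE ry.year = 2009;
```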
It might be better to use dates or timestamps to hold these data as well. Storing data as the type that it is, or the closest match your db offers, pays off when it comes time to manipulate the data: queries like "show me everything from 10 to 20 years ago, grouped by date" become trivial, especially if anything finer-grained than years ends up getting stored.
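For instance, a sketch against a hypothetical `events` table with an `event_date date` column:

```sql
-- Everything dated between 20 and 10 years ago, counted per date.
SELECT event_date, count(*)
FROM events
WHERE event_date >= current_date - interval '20 years'
  AND event_date <  current_date - interval '10 years'
GROUP BY event_date
ORDER BY event_date;
```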
If you store your years as a date type, e.g. '2009-01-01', then you've got a year you can do date math on, and the db validates it as a proper date, so you can't get 'mickey mouse' as a year accidentally (a CHECK constraint can optionally pin it to January 1st too).
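A quick illustration in psql:

```sql
-- Date math works directly: bump a stored year forward a decade.
SELECT DATE '2009-01-01' + interval '10 years';   -- 2019-01-01 00:00:00

-- The type itself rejects garbage that a char(4) column would happily accept.
SELECT DATE 'mickey mouse';   -- ERROR: invalid input syntax for type date
```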
Also there's size. In PostgreSQL a timestamp is 8 bytes, a date is only 4, and a 4-character string (text, varchar, or char(4), a y10k problem waiting to happen 🙂) takes 5 bytes including its header, so a date is no bigger than the string and half the size of a timestamp. With a 64-bit machine's alignment padding there may be no savings on disk using date in place of a short string, but indexes on the join table should be smaller, and therefore faster. Mostly, though, it's just the right thing to do with your data, to make sure you're always getting the right answer.
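You can check those sizes yourself with PostgreSQL's pg_column_size():

```sql
-- Actual byte sizes as PostgreSQL stores each value.
SELECT pg_column_size(DATE '2009-01-01')      AS date_bytes,      -- 4
       pg_column_size(TIMESTAMP '2009-01-01') AS timestamp_bytes, -- 8
       pg_column_size('2009'::char(4))        AS char4_bytes,     -- 5
       pg_column_size('2009'::text)           AS text_bytes;      -- 5
```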