
Portland Is in the Midst of a Barmageddon

1 Comment
The outside of Crush Bar during EastSide Pride in 2019.
Crush Bar in 2019. | Photos courtesy of Smirnoff

This year has brought a barrage of devastating closures to Portland’s vibrant bar scene. Crush Bar is the latest to go.

At the end of 2024, one of Portland’s foundational queer bars, Crush Bar, will close after 23 years in business. Crush Bar is the latest third place to fall during Portland’s 2024 barmageddon, a seemingly nonstop barrage of bar closures as operators struggle with rising costs, falling attendance, a flattening nightlife scene, and other woes related to running a business.

This isn’t the first brush with closure for Crush Bar. At the end of 2023, the bar announced that it would close before narrowly avoiding that fate through an investment from a staff member. Although the staff member was supposed to take over permanently to allow owner John “Woody” Clarke to retire, dwindling sales made a future for the bar less viable. “The decision to close is ultimately motivated by Woody’s retirement from the bar industry in combination with a rough economic climate,” a representative for Crush wrote in a statement to Eater Portland. “It is the end of an era and sad to lose another queer bar in a time going forward when we all need community the most.”

Although Crush has remained a fixture in Portland’s bar scene, some soured on the bar in 2020 after police shut down a protest by employees demanding payment for accrued sick days and guaranteed rehires once the bar reopened. After a complaint was filed with the National Labor Relations Board, Clarke settled with the employees.

In mid-October, another Portland queer bar, Sissy Bar, put out a statement saying it would be closing after Halloween. “Unfortunately, due to the devastating economic and social disruption caused by the pandemic, we have made the difficult decision to close our doors,” a post from the bar on October 13 reads. In the same month, psychedelic Portland bar the Houston Blacklight suddenly called it quits after opening in July 2023. During its short tenure, Eater Portland gave the Houston Blacklight its Eater Award for Best New Bar, and in 2024, Bon Appetit named the Houston Blacklight among its list of the best new bars in the country. But even a series of accolades wasn’t enough, as owners Mariah and Thomas Pisha-Duffly wrote in an Instagram post that the bar was never “financially sustainable.”

The beaded curtain at the Houston Blacklight. Molly J. Smith/Eater Portland
The Houston Blacklight.

October also saw the closure of Bar Asha and XO Bar from influential restaurateur Sanjay Chandrasekaran, as well as acclaimed Northeast Portland cocktail bar Cereus. Following the closure of Cereus, owner Bradley Stephens authored an open letter to the city of Portland, saying the closure was due to “the insane increased cost of operating a restaurant, bank headaches, and the often unpredictable consistency of when guests would go out for dinner or drinks.”

Although fall often brings less business to restaurants, Portland’s barmageddon extends past the colder months. In August, Mt. Tabor Brewing closed after 13 years and Apex poured its last beers. Earlier in the summer, typically peak season for brewing, Cascade Brewing shut down after 25 years in business. In May, Buckman neighborhood pub There Be Monsters closed after almost a decade in business.

But the year hasn’t been all bad news for Portland’s bars. Amid a year of steep inflation, skyrocketing labor and overhead costs, and other struggles that made running a business increasingly untenable, bars like Silk Road, Bauman’s on Oak, and Living Room Wines have opened their doors, bringing new life to the city’s cocktail, wine, beer, and cider scene.

Still, Portland’s new bars will have to face many of the same challenges that caused so many others to close. Nearly two months after closing Cereus, Stephens says it’ll be quite some time before he owns another bar, but he still has hope for others. “I think we’ll get there, but it’s going to be a hot minute,” he tells Eater Portland. He attributes the difficulties bars are currently facing to a lack of access to low-interest loans, difficulty competing for attention with venues that have larger marketing budgets, and an overall drop-off in local and traveler dining spend.

“People go out less now too, because of many reasons, but the rise of the cost of going out is a big one,” he says. “It would be huge if people tried to make small businesses a top priority when they can afford to go out.”

Update: December 2, 4:10 p.m. PST: The story was updated to include additional context about Crush’s closure.

AaronPresley
11 days ago
Oh so charging $8 for pints and $15 for bad cocktails isn't sustainable? 🙄
Portland, OR

Is Hood River’s Smoked Meat Destination, Grasslands Barbecue, Worth the Drive?

1 Comment
A tray full of brisket, sausage, pork belly, chicken, pulled pork, and pickles.
A tray of barbecue from Grasslands. | Grasslands Barbecue

The Texas-inspired barbecue cart has earned local praise and plenty of word-of-mouth buzz

Welcome to Worth the Drive, a column in which Team Eater Portland ventures to far-flung destination restaurants around the state to see if they live up to the hype. Know a restaurant you think is worth the drive? Let us know via the tipline.


The restaurant: Grasslands Barbecue

The location: Waterfront Park in Hood River; about an hour’s drive from Portland.

The price tag: Varies, since customers at Grasslands build meals by ordering meats by the half-pound and choosing a la carte sides like a Tex Mex slaw (cabbage, kale, Cotija, and pumpkin seeds) and roasted green chile mac and cheese.

The schtick: The cart serves Texas-ish style barbecue smoked with neutral-flavored Oregon white oak that allows the proteins to really shine — pork belly burnt ends glazed with a ginger-soy-serrano vinaigrette, black pepper and lemon chicken, house-made hatch chile-and-cheddar sausage. Grasslands is only open on weekends, serving its 18-hour brisket on Saturdays, tri-tip on Sundays, and weekly specials.

“All of us view barbecue as something that [shouldn’t] necessarily be available 24/7,” Marquis says. “It’s a special thing — it takes so long to cook and it’s a process.”

For the summer, the cart has also added a Friday lunch service, serving smoked burgers. Grasslands’ rotating sides are often on the lighter side, balancing out the barbecue. On this part of the menu, Grasslands takes fresh Hood River produce from farms like Costarossa, Killer Tomato, and Saur and turns it into dishes like tomato salad with colatura di alici and salsa verde potato salad with sugar snap peas.

The chefs: Drew Marquis, Brendon Bain, and Sam Carroll are three friends “who got the barbecue bug.” Marquis and Bain are restaurant industry vets while Carroll was formerly the head brewer at Occidental Brewing. The three chefs eschew the term “pitmaster” — they share duties evenly, from manning the smoker to hand-making sausage to prepping sides.

Three men stand in front of a food truck with forested hills behind. Grasslands Barbecue
Drew Marquis, Sam Carroll, and Brendon Bain.

The history: A barbecue spot in Hood River was always the goal for Marquis, who launched his Bootleg Barbecue pop-up in Seattle before linking up with Bain, his old college friend, and Carroll, whom he met through working at Pike Place Market’s DeLaurenti. Marquis had planned to spend a year gaining experience working at Wood Shop BBQ before his job was eliminated due to the pandemic.

The project that would evolve into Grasslands started small, with Marquis testing his barbecue on friends before selling it vacuum-sealed to customers and then doing pop-ups at Holy Mountain Brewing. The trio took that momentum and relocated to the Hood River area, where they did pop-ups at farmers markets and in Portland while they waited for their food truck to get built out. The truck opened for business in 2021.

The experience: While the line may be long, you’ll spend some of that time deciding what to order. If you’re set on ordering the entire menu to get the whole hog experience, as it were, round up a few friends to help you crush the spread. From ordering to dining, the whole Grasslands experience feels like a lawn party. There are kids and dogs running around and, thanks to Hood River’s open container laws, you can enjoy a brew from Ferment Brewing Company, which sets up a little beer cart right on the lawn. It’s a vibe.

“With the Texas barbecue experience especially, there’s a line, there’s music playing,” Marquis says. “We encourage people to hang out.”

Worth the drive? Absolutely. With the cart’s tight-knit team making everything in-house, the standard Grasslands holds itself to translates into consistently delicious barbecue. The menu also finds places to showcase Hood River’s farm culture, bringing a Pacific Northwest flair to its Texas-style foundation.

Plus, the drive to Hood River makes for the perfect day trip — far enough to feel completely removed from the city but close enough that it’s not a huge schlep. Our pro tip is to go early and make a day of it: kayak the river, visit farms around the Fruit Loop, or simply lounge in a post-barbecue feast haze by the waterfront.

AaronPresley
165 days ago
Agreed, some of the best brisket I've had in the PNW. The other being Matt's BBQ of course.
Portland, OR

How to Build a Translation Pipeline


Last post I wrote about the business case for a translation pipeline, and why keeping it vendor-agnostic is the ideal scenario.

Now let’s talk about the how. Someday I’ll write more about why you should trust my opinion on this topic, but for now you’ll just have to believe that I have experience in this exact thing.

It Needs Projects and Keys

All TMS (translation management system) vendors have some notion of Projects and Keys. Our translation pipeline will need these concepts as well.

Project

This is basically a way to group relevant keys. If you’re a development team, each codebase likely has its own project (but not always).

A project always has a source language (the language your company authors content in) and target languages (the languages into which you’re translating this group of keys).

Keys

This is a single translatable item. There’s really no one-size-fits-all definition of what a key is supposed to be.

If you’re a marketing team, a key could be a single blog post, or you could make individual keys out of each paragraph. If you’re a development team translating an interface, then a key could be a button label, a form field placeholder, or really anything with text that will be read by an end user.

What’s most relevant for our purposes is that each key has three important parts:

  • Unique ID – call it a slug, id, or whatever makes sense. It can be words, emoji, numbers. It can be dynamically created or it can be something specific. What’s important is that it’s unique within your project.
  • Original Value – this is the actual content that will get translated into multiple languages. It can be a lot of text, or a little. It can include HTML, or not. Keep in mind that non-technical translators will be working with this content, so if it has a lot of HTML and variables then it will increase the number of mistakes during the translation process.
  • Description – this is a one-sentence description of where and how this specific key is being used. Providing this to your translator will give them the context they need to provide the most correct translation.
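To make the shape concrete, here’s a minimal sketch of these two concepts in Python. The field and class names are my own, not any particular TMS vendor’s schema:

```python
from dataclasses import dataclass, field

@dataclass
class Key:
    """A single translatable item within a project."""
    id: str              # unique within the project (slug, number, emoji...)
    original_value: str  # source-language content that will be translated
    description: str     # one-sentence context for the translator

@dataclass
class Project:
    """A named group of related keys with one source and many targets."""
    name: str
    source_language: str
    target_languages: list[str]
    keys: dict[str, Key] = field(default_factory=dict)

    def add_key(self, key: Key) -> None:
        # Enforce the one hard rule: IDs are unique within a project.
        if key.id in self.keys:
            raise ValueError(f"Duplicate key id: {key.id}")
        self.keys[key.id] = key
```

The uniqueness check lives on the project rather than the key, since a key’s ID only has to be unique within its own project.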

It Doesn’t Have to be Insanely Performant

You read that correctly. There is never a case in which you should be making an API call to your TMS on every page load.

This could be its own blog post someday, but all translations should be coming from a local file in your repo, or a cache, or, if you must, something like S3.

What this means for us is that we could build an incredibly resilient translation pipeline that takes up relatively few resources. It would use SNS + SQS (or similar) to make decisions on changed strings, and it would push compiled translations into S3 or similar. I’ll be discussing this in more technical detail in future posts.
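As a sketch of the “translations come from a local file” rule, here’s what serving translations can look like at runtime, assuming the pipeline has already compiled one JSON file per language (the file layout is my invention, not a standard):

```python
import json
from pathlib import Path

def load_translations(locale_dir: str, language: str, fallback: str = "en") -> dict:
    """Load compiled translations from a local JSON file.

    No TMS API call happens here -- by the time the app is serving
    traffic, the pipeline has already pushed compiled translations
    into files (or a cache, or S3) at deploy time.
    """
    path = Path(locale_dir) / f"{language}.json"
    if not path.exists():
        # Fall back to the source language rather than failing the page.
        path = Path(locale_dir) / f"{fallback}.json"
    with open(path, encoding="utf-8") as f:
        return json.load(f)
```

The important property is that a missing language degrades to the source language instead of an error, and the TMS is never in the request path.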

It Needs A Lot of TMS Connectors

This is really the most complicated aspect of a translation pipeline: pushing and pulling translatable content into and out of TMS vendors.

Some vendors have clearly documented APIs, others don’t. Some have very strict rate limiting (for reasons mentioned in the above section). Your code has to be able to retry intelligently and have a ton of logging to help developers debug any issues.
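A minimal sketch of that retry-and-log behavior, with exponential backoff between attempts (the backoff numbers and the blanket exception handling are illustrative, not tuned for any particular vendor):

```python
import logging
import time

logger = logging.getLogger("tms_connector")

def push_with_retry(push_fn, payload, max_attempts: int = 5, base_delay: float = 1.0):
    """Call a vendor push function, retrying with exponential backoff.

    push_fn is whatever wraps the vendor's HTTP API; it is expected to
    raise on rate-limit or transient failures.
    """
    for attempt in range(1, max_attempts + 1):
        try:
            return push_fn(payload)
        except Exception as exc:
            # Log every failure so a developer can reconstruct what happened.
            logger.warning("push attempt %d/%d failed: %s", attempt, max_attempts, exc)
            if attempt == max_attempts:
                raise
            time.sleep(base_delay * 2 ** (attempt - 1))
```

A real connector would only retry on retryable errors (429s, timeouts) and respect any Retry-After header the vendor sends back.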

Nice-to-Haves

There are plenty more features that would help a company more easily translate their content. Here are a few that come to mind.

Test Projects

When a development team is testing an integration, they don’t want to wait for a key to push through SNS, SQS, into a TMS vendor, and back again. They want a quick cycle to confirm their APIs are working as expected.

This is where it’s useful to have a Test Mode for a Project that skips the TMS and simply applies Pseudo Translation to each key.
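Pseudo translation can be as simple as wrapping the original value in markers and swapping ASCII letters for accented look-alikes, so developers can spot untranslated or clipped strings at a glance. The exact character map and markers here are my own:

```python
# Map plain vowels to accented look-alikes so pseudo-translated strings
# are still readable but obviously not the source text.
ACCENTED = str.maketrans("aeiouAEIOU", "àéîöûÀÉÎÖÛ")

def pseudo_translate(value: str) -> str:
    """Return a fake 'translation' without ever touching a TMS."""
    # The brackets also help catch UI truncation, since the pseudo
    # string is slightly longer than the original.
    return f"[!! {value.translate(ACCENTED)} !!]"
```

In Test Mode, every key in the project gets run through this function for each target language, and the integration can be verified end to end in seconds.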

Lots and Lots of SDK’s

As you can imagine, there are a lot of small tools and scripts that could simplify the process of a team integrating with a specific Project. The most common is a way to map an incoming language to a supported language.

For example, some devices might send an Accept-Language header of en-US while your project supports en_GB. There are a lot of small edge cases like this that are easily fixed with a few utilities.
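A sketch of that mapping utility, normalizing an incoming tag and falling back to a base-language match before giving up. The matching rules here are deliberately simplified; real BCP 47 matching has many more cases:

```python
def match_language(accept_language: str, supported: list[str], default: str) -> str:
    """Map an incoming language tag to one of the project's supported languages."""
    # Normalize: Accept-Language uses hyphens (en-US); many TMSs use underscores (en_US).
    tag = accept_language.strip().replace("-", "_")
    lang = tag.split("_")[0].lower()

    normalized = {s.replace("-", "_").lower(): s for s in supported}

    # 1. Exact match, ignoring case and separator style (en-us -> en_US).
    if tag.lower() in normalized:
        return normalized[tag.lower()]
    # 2. Base-language match: en_US can fall back to en, or to any en_* variant.
    if lang in normalized:
        return normalized[lang]
    for key, original in normalized.items():
        if key.split("_")[0] == lang:
            return original
    # 3. Nothing matched: use the project's default.
    return default
```

With this in an SDK, an integrating team never has to think about separator style or regional variants themselves.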


Potluck: Dynamic documents as personal software

Gradually enriching text documents into interactive applications

Announcing ICU4X 1.0



I. Introduction

Hello! Ndeewo! Molweni! Салам! Across the world, people are coming online with smartphones, smart watches, and other small, low-resource devices. The technology industry needs an internationalization solution for these environments that scales to dozens of programming languages and thousands of human languages.


Enter ICU4X. As the name suggests, ICU4X is an offshoot of the industry-standard i18n library published by the Unicode Consortium, ICU (International Components for Unicode), which is embedded in every major device and operating system.


This week, after 2½ years of work by Google, Mozilla, Amazon, and community partners, the Unicode Consortium has published ICU4X 1.0, its first stable release. Built from the ground up to be lightweight, portable, and secure, ICU4X learns from decades of experience to bring localized date formatting, number formatting, collation, text segmentation, and more to devices that, until now, did not have a suitable solution.


Lightweight: ICU4X is Unicode's first library to support static data slicing and dynamic data loading. With ICU4X, clients can inspect their compiled code to easily build small, optimized locale data packs and then load those data packs on the fly, enabling applications to scale to more languages than ever before. Even when platform i18n is available, ICU4X is suitable as a polyfill to add additional features or languages. It does this while using very little RAM and CPU, helping extend devices' battery life.


Portable: ICU4X supports multiple programming languages out of the box. ICU4X can be used in the Rust programming language natively, with official wrappers in C++ via the foreign function interface (FFI) and JavaScript via WebAssembly. More programming languages can be added by writing plugins, without needing to touch core i18n logic. ICU4X also allows data files to be updated independently of code, making it easier to roll out Unicode updates.


Secure: Rust's type system and ownership model guarantee memory-safety and thread-safety, preventing large classes of bugs and vulnerabilities.


How does ICU4X achieve these goals, and why did the team choose to write ICU4X over any number of alternatives?


II. Why ICU4X?

You may still be wondering: what led the Unicode Consortium to choose a new Rust-based library as the solution to these problems?

II.A. Why a new library?

The Unicode Consortium also publishes ICU4C and ICU4J, i18n libraries written for C/C++ and Java. Why write a new library from scratch? Wouldn’t that increase the ongoing maintenance burden? Why not focus our efforts on improving ICU4C and/or ICU4J instead?


ICU4X solves a different problem for different types of clients. ICU4X does not seek to replace ICU4C or ICU4J; rather, it seeks to replace the large number of non-Unicode, often-unmaintained, often-incomplete i18n libraries that have been written to bring i18n to new programming languages and resource-constrained environments. ICU4X is a product that has long been missing from Unicode's portfolio.


Early on, the team evaluated whether ICU4X's goals could have been achieved by refactoring ICU4C or ICU4J. We found that:


  1. ICU4C has already gone through a period of optimization for tree shaking and data size. Despite these efforts, we continue to have stakeholders saying that ICU4C is too large for their resource-constrained environment. Getting further improvements in ICU4C would amount to rewrites of much of ICU4C's code base, which would need to be done in a way that preserves backwards compatibility. This would be a large engineering effort with an uncertain final result. Furthermore, writing a new library allows us to additionally optimize for modern UTF-8-native environments.

  2. Except for JavaScript via j2cl, Java is not a suitable source language for portability to low-resource environments like wearables. Further, ICU4J has many interdependent parts that would require a large amount of effort to bring to a state where it could be a viable j2cl source.

  3. Some of our stakeholders (Firefox and Fuchsia) are drawn to Rust's memory safety. Like most complex C++ projects, ICU4C has had its share of CVEs, mostly relating to memory safety. Although C++ diagnostic tools are improving, Rust has very strong guarantees that are impossible in other software stacks.


For all these reasons, we decided that a Rust-based library was the best long-term choice.

II.B. Why use ICU4X when there is i18n in the platform?

Many of the same people who work on ICU4X also work to make i18n available in the platform (browser, mobile OS, etc.) through APIs such as the ECMAScript Intl object, android.icu, and other smartphone native libraries. ICU4X complements the platform-based solutions as the ideal polyfill:


  1. Some platform i18n features take 5 or more years to gain wide enough availability to be used in client-side applications. ICU4X can bridge the gap.

  2. ICU4X can enable clients to add more locales than those available in the platform.

  3. Some clients prefer identical behavior of their app across multiple devices. ICU4X can give them this level of consistency.

  4. Eventually, we hope that ICU4X will back platform implementations in ECMAScript and elsewhere, providing a maximal amount of consistency when ICU4X is also used as a polyfill.


II.C. Why pluggable data?

One of the most visible departures that ICU4X makes from ICU4C and ICU4J is an explicit data provider argument on most constructor functions. The ICU4X data provider supports the following use cases:


  1. Data files that are readable by both older and newer versions of the code; for more detail on how this works, see ICU4X Data Versioning Design.

  2. Data files that can be swapped in and out at runtime, making it easy to upgrade Unicode, CLDR, or time zone database versions. Swapping in new data can be done at runtime without needing to restart the application or clear internal caches.

  3. Multiple data sources. For example, some data may be baked into the app, some may come from the operating system, and some may come from an HTTP service.

  4. Customizable data caches. We recognize that there is no "one size fits all" approach to caching, so we allow the client to configure their data pipeline with the appropriate type of cache.

  5. Fully configurable data fallbacks and overlays. Individual fields of ICU4X data can be selectively overridden at runtime.
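ICU4X itself implements this in Rust with zero-copy data structs, and none of the names below are ICU4X API. Purely to illustrate the pluggable-provider pattern the list above describes — multiple chained data sources behind one interface — here is a language-agnostic sketch in Python:

```python
from abc import ABC, abstractmethod

class DataProvider(ABC):
    """The pluggable-provider idea: formatters ask a provider for locale
    data instead of reaching into one hard-wired global data source."""
    @abstractmethod
    def load(self, key: str, locale: str):
        ...

class BakedProvider(DataProvider):
    """Data compiled directly into the application (cf. ICU4X's DataBake)."""
    def __init__(self, data: dict):
        self.data = data
    def load(self, key, locale):
        return self.data.get((key, locale))

class FallbackProvider(DataProvider):
    """Try providers in order -- e.g. app data, then OS data, then a service."""
    def __init__(self, *providers: DataProvider):
        self.providers = providers
    def load(self, key, locale):
        for provider in self.providers:
            result = provider.load(key, locale)
            if result is not None:
                return result
        return None
```

Because every formatter takes a provider argument, swapping in a new data source, a cache, or an overlay is a composition change at the call site rather than a change to the i18n logic.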



III. How We Made ICU4X Lightweight

There are three factors that combine to make code lightweight: small binary size, low memory usage, and deliberate performance optimizations. For all three, we have metrics that are continuously measured on GitHub Actions continuous integration (CI).


III.A. Small Binary Size

Internationalization involves a large number of components with many interdependencies. To combat this problem, ICU4X optimizes for "tree shaking" (dead code elimination) by:


  1. Minimizing the number of dependencies of each individual component.

  2. Using static types in ways that scope functions to the pieces of data they need.

  3. Splitting functions and classes that pull in more data than they need into multiple, smaller pieces.


Developers can statically link ICU4X and run a tree-shaking tool like LLVM link-time optimization (LTO) to produce a very small amount of compiled code, and then they can run our static analysis tool to build an optimally small data file for it.


In addition to static analysis, ICU4X supports dynamic data loading out of the box. This is the ultimate solution for supporting hundreds of languages, because new locale data can be downloaded on the fly only when it is needed, similar to message bundles for UI strings.

III.B. Low Memory Usage

At its core, internationalization transforms inputs to human-readable outputs, using locale-specific data. ICU4X introduces novel strategies for runtime loading of data involving zero memory allocations:


  1. Supports Postcard-format resource files for dynamically loaded, zero-copy deserialized data across all architectures.

  2. Supports compile-time linking of required data without deserialization overhead via DataBake.

  3. Data schema is designed so that individual components can use the immutable locale data directly with minimal post-processing, greatly reducing the need for internal caches.

  4. Explicit "data provider" argument to each function that requires data, making it very clear when data is required.


ICU4X team member Manish Goregaokar wrote a blog post series detailing how the zero-copy deserialization works under the covers.


III.C. Deliberate Performance Optimizations

Reducing CPU usage improves latency and battery life, important to most clients. ICU4X achieves low CPU usage by:


  1. Writing in Rust, a high-performance language.

  2. Utilizing zero-copy deserialization.

  3. Measuring every change against performance benchmarks.


The ICU4X team uses a benchmark-driven approach to achieve highly competitive performance numbers: newly added components should have benchmarks, and future changes to those components should avoid regressing on those benchmarks.


Although we always seek to improve performance, we do so deliberately. There are often space/time tradeoffs, and the team takes a balanced approach. For example, if improving performance requires increasing or duplicating the data requirements, we tend to favor smaller data, like we've done in the normalizer and collator components. In the segmenter components, we offer two modes: a machine learning LSTM segmenter with a smaller data size but heavier CPU usage, and a dictionary-based segmenter with a larger data size but faster performance. (There is ongoing work to make the LSTM segmenter require fewer CPU resources.)


IV. How We Made ICU4X Portable

The software ecosystem continually evolves with new programming languages. The "X" in ICU4X is a nod to the second main design goal: portability to many different environments.


ICU4X is Unicode's first internationalization library to have official wrappers in more than one target language. We do this with a tool we designed called Diplomat, which generates idiomatic bindings in many programming languages that encourage i18n best practices. Thanks to Diplomat, these bindings are easy to maintain, and new programming languages can be added without needing i18n expertise.


Under the covers, ICU4X is written in no_std Rust (no system dependencies) wrapped in a stable ABI that Diplomat bindings invoke across foreign function interface (FFI) or WebAssembly (WASM). We have some basic tutorials for using ICU4X from C++ and JavaScript/TypeScript.



V. What’s next?

ICU4X represents an exciting new step in bringing internationalized software to more devices, use cases, and programming languages. A Unicode working group is hard at work on expanding ICU4X’s feature set over time so that it becomes more useful and performant; we are eager to learn about new use cases and have more people contribute to the project.


Have questions?  You can contact us on the ICU4X discussion forum!


Want to try it out? See our tutorials, especially our Intro tutorial!

Interested in getting involved? See our Contribution Guide.


Want to stay posted on future ICU4X updates? Sign up for our low-traffic announcements list, icu4x-announce@unicode.org!





Rising from the ashes: Stage Manager


Every year I worked on macOS/iOS, I would get attached to a handful of features that would ultimately get axed. Over time, I grew desensitized to it, but sometimes a feature would come along that I would never be able to get over.

While Apple was transitioning to Intel in …


The post Rising from the ashes: Stage Manager appeared first on Tech Reflect.
